{
 "cells": [
  {
   "cell_type": "raw",
   "id": "bb463b9e",
   "metadata": {},
   "source": [
    "Clustering:\n",
    "        Purpose:\n",
    "            With unlabeled data, group samples into categories (clusters) based on their mutual similarity. The resulting clusters\n",
    "            have no real names; they are temporarily labelled 0, 1, 2, ... and meaningful names must still be assigned by the user.\n",
    "        Key terms:\n",
    "            Cluster: a category, in the same sense as a class in classification.\n",
    "            Outlier: an anomalous value / noise point.\n",
    "        Use cases:\n",
    "            1. Partition data into categories when the classes are unknown.\n",
    "            2. Act as a preprocessing step for classification: the clusters it finds can serve as the classes a later classifier predicts.\n",
    "            3. Outlier detection.\n",
    "        Evaluating clustering models:\n",
    "            Internal metrics: compare models against each other, without ground-truth labels, to decide which is better.\n",
    "                Internal evaluation methods: silhouette coefficient, elbow method.\n",
    "            External metrics: compare the clustering against reference labels (e.g. a domain expert's grouping); the Rand index is an\n",
    "                external metric. When reference labels exist, external evaluation is more reliable.\n",
    "        Algorithms:\n",
    "            What classification algorithms are based on:\n",
    "                KNN: classifies by distance.\n",
    "                Naive Bayes: classifies by probability.\n",
    "                Decision tree: builds a tree of decisions.\n",
    "                SVM: the optimal separating hyperplane.\n",
    "            What clustering algorithms are based on: all of them involve distance.\n",
    "                Prototype (distance)-based clustering:\n",
    "                    k-means:\n",
    "                        1. Choose K, the number of clusters to form (at least 2). Different values must be tried to find a good one.\n",
    "                        2. Pick K random points from the data as initial centers (centroids).\n",
    "                        3. Compute each point's distance to every centroid and assign it to the nearest one, giving K clusters.\n",
    "                        4. Recompute each cluster's centroid as the mean of its points.\n",
    "                        5. Repeat steps 3 and 4 until the cluster assignments no longer change.\n",
    "                        6. Stop.\n",
    "                        Pros:\n",
    "                            Simple, fast, and easy to understand.\n",
    "                            Works well on large datasets.\n",
    "                        Cons:\n",
    "                            K is hard to choose.\n",
    "                            Because a centroid is a mean and need not be an actual sample point, the algorithm is sensitive to outliers.\n",
    "                        Improvement: k-means++, which picks initial centroids that are as far apart from one another as possible.\n",
    "                    k-medoids:\n",
    "                        Fixes k-means' sensitivity to outliers caused by using the mean.\n",
    "                        1. Choose K, the number of clusters to form (at least 2). Different values must be tried to find a good one.\n",
    "                        2. Pick K random points from the data as initial centers.\n",
    "                        3. Compute each point's distance to every center and assign it to the nearest one, giving K clusters.\n",
    "                        4. Within each cluster, pick as the new center (medoid) the sample whose total distance to the other points is smallest.\n",
    "                        5. Repeat steps 3 and 4 until the cluster assignments no longer change.\n",
    "                        6. Stop.\n",
    "                        Pros:\n",
    "                            Insensitive to outliers.\n",
    "                            Simple and easy to understand.\n",
    "                        Cons:\n",
    "                            Higher computational cost, which makes it unsuitable for large datasets.\n",
    "                            K is hard to choose.\n",
    "                Hierarchical clustering:\n",
    "                    Idea: build a tree over the data using distances, either bottom-up (agglomerative) or top-down (divisive).\n",
    "                          Sometimes no usable clustering emerges.\n",
    "                    BIRCH: commonly used and frequently tested.\n",
    "                        Pros:\n",
    "                            1. Handles datasets with unusual shapes (ring-shaped, concave).\n",
    "                        Cons:\n",
    "                            1. Higher algorithmic complexity.\n",
    "                            2. Resource-hungry; unsuitable for large datasets.\n",
    "                    AGNES:\n",
    "                        Pros:\n",
    "                            1. Handles datasets with unusual shapes (ring-shaped, concave).\n",
    "                        Cons:\n",
    "                            1. Higher algorithmic complexity.\n",
    "                            2. Resource-hungry; unsuitable for large datasets.\n",
    "                    CURE:\n",
    "                        Pros:\n",
    "                            1. Handles datasets with unusual shapes (ring-shaped, concave).\n",
    "                        Cons:\n",
    "                            1. Higher algorithmic complexity.\n",
    "                            2. Resource-hungry; unsuitable for large datasets.\n",
    "                Density-based clustering:\n",
    "                    DBSCAN:\n",
    "                        Idea: draw a circle of radius eps around each point; points whose circle contains at least min_samples points\n",
    "                              are core points, and clusters grow outward from them.\n",
    "                        Pros:\n",
    "                            1. Handles datasets with unusual shapes (ring-shaped, concave).\n",
    "                            2. Insensitive to outliers.\n",
    "                            3. No need to specify the number of clusters: the constructor has no cluster-count parameter, only the\n",
    "                               radius and the minimum number of samples per neighborhood.\n",
    "                        Cons:\n",
    "                            1. Computationally expensive.\n",
    "                            2. Poorly suited to high-dimensional data.\n",
    "                    OPTICS:\n",
    "                        Similar to DBSCAN; also parameterized by a radius and a minimum sample count.\n",
    "                Spectral clustering:\n",
    "            Practical notes for clustering:\n",
    "                1. Do not split the dataset (no row-wise or column-wise splits).\n",
    "                2. All features must be numeric (in the exam, all data are numeric).\n",
    "                3. No missing values are allowed.\n",
    "            What the exam covers for clustering:\n",
    "                1. Read the data.\n",
    "                2. Feature scaling.\n",
    "                3. Modeling/evaluation, possibly with parameter tuning (e.g. K for k-means, eps for DBSCAN; usually via a for loop).\n",
    "                4. Plotting."
   ]
  },
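  {
   "cell_type": "markdown",
   "id": "f0e1d2c3",
   "metadata": {},
   "source": [
    "The notes above list the elbow method as an internal metric but the notebook never demonstrates it, so here is a minimal sketch on synthetic data (the `make_blobs` data and all variable names are illustrative assumptions, not part of the exercise): run KMeans for a range of K, record `inertia_` (the within-cluster sum of squared distances), and look for the \"elbow\" where the curve stops dropping sharply."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e4f5a6b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical elbow-method sketch on synthetic blobs (not the exercise data).\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.datasets import make_blobs\n",
    "\n",
    "X, _ = make_blobs(n_samples=300, centers=4, random_state=42)\n",
    "inertias = []\n",
    "for k in range(2, 10):\n",
    "    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)\n",
    "    inertias.append(km.inertia_)  # within-cluster sum of squared distances\n",
    "# inertia always decreases as K grows; the elbow is where the drop levels off\n",
    "print(inertias)"
   ]
  },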
  {
   "cell_type": "markdown",
   "id": "c6c93c32",
   "metadata": {},
   "source": [
    "1. Read the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "f1f48951",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "81ca88e3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>客户编号</th>\n",
       "      <th>套餐品牌</th>\n",
       "      <th>信用等级</th>\n",
       "      <th>是否使用4GUSIM卡</th>\n",
       "      <th>是否4G资费</th>\n",
       "      <th>网龄</th>\n",
       "      <th>当月ARPU</th>\n",
       "      <th>current_month_MOU</th>\n",
       "      <th>current_month_DOU</th>\n",
       "      <th>终端使用时间（月）</th>\n",
       "      <th>当月省内漫游时长</th>\n",
       "      <th>当月省际漫游时长</th>\n",
       "      <th>营销是否成功</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10942</td>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>204</td>\n",
       "      <td>2201.08</td>\n",
       "      <td>2611</td>\n",
       "      <td>54557</td>\n",
       "      <td>22</td>\n",
       "      <td>42</td>\n",
       "      <td>1528</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>13382</td>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>201</td>\n",
       "      <td>2181.71</td>\n",
       "      <td>3371</td>\n",
       "      <td>35250</td>\n",
       "      <td>15</td>\n",
       "      <td>24</td>\n",
       "      <td>1120</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>4192</td>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>167</td>\n",
       "      <td>2055.60</td>\n",
       "      <td>6913</td>\n",
       "      <td>5884426</td>\n",
       "      <td>1</td>\n",
       "      <td>2708</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10908</td>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>171</td>\n",
       "      <td>1827.33</td>\n",
       "      <td>2157</td>\n",
       "      <td>178070</td>\n",
       "      <td>4</td>\n",
       "      <td>260</td>\n",
       "      <td>15</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>14130</td>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>216</td>\n",
       "      <td>1736.40</td>\n",
       "      <td>4218</td>\n",
       "      <td>358592</td>\n",
       "      <td>35</td>\n",
       "      <td>28</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    客户编号  套餐品牌  信用等级  是否使用4GUSIM卡  是否4G资费   网龄   当月ARPU  current_month_MOU  \\\n",
       "0  10942     2     5            0       1  204  2201.08               2611   \n",
       "1  13382     2     5            0       0  201  2181.71               3371   \n",
       "2   4192     2     5            1       1  167  2055.60               6913   \n",
       "3  10908     2     5            1       0  171  1827.33               2157   \n",
       "4  14130     2     5            0       1  216  1736.40               4218   \n",
       "\n",
       "   current_month_DOU  终端使用时间（月）  当月省内漫游时长  当月省际漫游时长  营销是否成功  \n",
       "0              54557         22        42      1528       0  \n",
       "1              35250         15        24      1120       0  \n",
       "2            5884426          1      2708         0       1  \n",
       "3             178070          4       260        15       1  \n",
       "4             358592         35        28         0       1  "
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = pd.read_csv('./data.csv',encoding='gbk')  # encoding sets the character encoding; the default is utf-8, and utf-8 or gbk are the common choices\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae626a32",
   "metadata": {},
   "source": [
    "2. Normalize the data with min-max scaling   ---- feature scaling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "ac561837",
   "metadata": {},
   "outputs": [],
   "source": [
    "from  sklearn.preprocessing import MinMaxScaler\n",
    "\n",
    "model =MinMaxScaler()\n",
    "df_mm = model.fit_transform(df)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b60ddce",
   "metadata": {},
   "source": [
    "3. Cluster with KMeans for each K from 2 to 12, print the silhouette score for each K, then plot K against the silhouette score as a line chart."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "bd7ee5a1",
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Silhouette score for K=2: 0.9511869651561335\n",
      "Silhouette score for K=3: 0.8323526323528379\n",
      "Silhouette score for K=4: 0.7844204055348635\n",
      "Silhouette score for K=5: 0.7246626891913978\n",
      "Silhouette score for K=6: 0.6991207532065212\n",
      "Silhouette score for K=7: 0.6710662891152118\n",
      "Silhouette score for K=8: 0.6561411094632521\n",
      "Silhouette score for K=9: 0.6565404039202589\n"
     ]
    },
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "Cell \u001b[1;32mIn[4], line 11\u001b[0m\n\u001b[0;32m      9\u001b[0m \u001b[38;5;66;03m# 训练\u001b[39;00m\n\u001b[0;32m     10\u001b[0m model_kmean\u001b[38;5;241m.\u001b[39mfit(df)\n\u001b[1;32m---> 11\u001b[0m sh_s \u001b[38;5;241m=\u001b[39m \u001b[43msilhouette_score\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdf\u001b[49m\u001b[43m,\u001b[49m\u001b[43mmodel_kmean\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlabels_\u001b[49m\u001b[43m)\u001b[49m  \u001b[38;5;66;03m#model_kmean.labels_数据类别\u001b[39;00m\n\u001b[0;32m     12\u001b[0m k_list\u001b[38;5;241m.\u001b[39mappend(k)\n\u001b[0;32m     13\u001b[0m sh_score\u001b[38;5;241m.\u001b[39mappend(sh_s)\n",
      "File \u001b[1;32m~\\AppData\\Roaming\\Python\\Python310\\site-packages\\sklearn\\metrics\\cluster\\_unsupervised.py:117\u001b[0m, in \u001b[0;36msilhouette_score\u001b[1;34m(X, labels, metric, sample_size, random_state, **kwds)\u001b[0m\n\u001b[0;32m    115\u001b[0m     \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m    116\u001b[0m         X, labels \u001b[38;5;241m=\u001b[39m X[indices], labels[indices]\n\u001b[1;32m--> 117\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m np\u001b[38;5;241m.\u001b[39mmean(silhouette_samples(X, labels, metric\u001b[38;5;241m=\u001b[39mmetric, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwds))\n",
      "File \u001b[1;32m~\\AppData\\Roaming\\Python\\Python310\\site-packages\\sklearn\\metrics\\cluster\\_unsupervised.py:237\u001b[0m, in \u001b[0;36msilhouette_samples\u001b[1;34m(X, labels, metric, **kwds)\u001b[0m\n\u001b[0;32m    233\u001b[0m kwds[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmetric\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m metric\n\u001b[0;32m    234\u001b[0m reduce_func \u001b[38;5;241m=\u001b[39m functools\u001b[38;5;241m.\u001b[39mpartial(\n\u001b[0;32m    235\u001b[0m     _silhouette_reduce, labels\u001b[38;5;241m=\u001b[39mlabels, label_freqs\u001b[38;5;241m=\u001b[39mlabel_freqs\n\u001b[0;32m    236\u001b[0m )\n\u001b[1;32m--> 237\u001b[0m results \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mzip\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mpairwise_distances_chunked\u001b[49m\u001b[43m(\u001b[49m\u001b[43mX\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreduce_func\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mreduce_func\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwds\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m    238\u001b[0m intra_clust_dists, inter_clust_dists \u001b[38;5;241m=\u001b[39m results\n\u001b[0;32m    239\u001b[0m intra_clust_dists \u001b[38;5;241m=\u001b[39m np\u001b[38;5;241m.\u001b[39mconcatenate(intra_clust_dists)\n",
      "File \u001b[1;32m~\\AppData\\Roaming\\Python\\Python310\\site-packages\\sklearn\\metrics\\pairwise.py:1826\u001b[0m, in \u001b[0;36mpairwise_distances_chunked\u001b[1;34m(X, Y, reduce_func, metric, n_jobs, working_memory, **kwds)\u001b[0m\n\u001b[0;32m   1824\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m reduce_func \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m   1825\u001b[0m     chunk_size \u001b[38;5;241m=\u001b[39m D_chunk\u001b[38;5;241m.\u001b[39mshape[\u001b[38;5;241m0\u001b[39m]\n\u001b[1;32m-> 1826\u001b[0m     D_chunk \u001b[38;5;241m=\u001b[39m \u001b[43mreduce_func\u001b[49m\u001b[43m(\u001b[49m\u001b[43mD_chunk\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43msl\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstart\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m   1827\u001b[0m     _check_chunk_size(D_chunk, chunk_size)\n\u001b[0;32m   1828\u001b[0m \u001b[38;5;28;01myield\u001b[39;00m D_chunk\n",
      "File \u001b[1;32m~\\AppData\\Roaming\\Python\\Python310\\site-packages\\sklearn\\metrics\\cluster\\_unsupervised.py:137\u001b[0m, in \u001b[0;36m_silhouette_reduce\u001b[1;34m(D_chunk, start, labels, label_freqs)\u001b[0m\n\u001b[0;32m    135\u001b[0m clust_dists \u001b[38;5;241m=\u001b[39m np\u001b[38;5;241m.\u001b[39mzeros((\u001b[38;5;28mlen\u001b[39m(D_chunk), \u001b[38;5;28mlen\u001b[39m(label_freqs)), dtype\u001b[38;5;241m=\u001b[39mD_chunk\u001b[38;5;241m.\u001b[39mdtype)\n\u001b[0;32m    136\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m i \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(\u001b[38;5;28mlen\u001b[39m(D_chunk)):\n\u001b[1;32m--> 137\u001b[0m     clust_dists[i] \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[43mnp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbincount\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m    138\u001b[0m \u001b[43m        \u001b[49m\u001b[43mlabels\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mweights\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mD_chunk\u001b[49m\u001b[43m[\u001b[49m\u001b[43mi\u001b[49m\u001b[43m]\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mminlength\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mlen\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mlabel_freqs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m    139\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m    141\u001b[0m \u001b[38;5;66;03m# intra_index selects intra-cluster distances within clust_dists\u001b[39;00m\n\u001b[0;32m    142\u001b[0m intra_index \u001b[38;5;241m=\u001b[39m (np\u001b[38;5;241m.\u001b[39marange(\u001b[38;5;28mlen\u001b[39m(D_chunk)), labels[start : start \u001b[38;5;241m+\u001b[39m \u001b[38;5;28mlen\u001b[39m(D_chunk)])\n",
      "File \u001b[1;32m<__array_function__ internals>:180\u001b[0m, in \u001b[0;36mbincount\u001b[1;34m(*args, **kwargs)\u001b[0m\n",
      "\u001b[1;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "from sklearn.cluster import KMeans\n",
    "from sklearn.metrics import silhouette_score  # silhouette coefficient\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "k_list, sh_score = [], []\n",
    "# try each candidate K\n",
    "for k in range(2,13):\n",
    "    model_kmean = KMeans(n_clusters=k)  # n_clusters is K, the number of clusters; init chooses the centroid-initialization method; max_iter caps the iterations\n",
    "    # fit the model\n",
    "    model_kmean.fit(df)\n",
    "    sh_s = silhouette_score(df,model_kmean.labels_)  # model_kmean.labels_ holds each sample's cluster label\n",
    "    k_list.append(k)\n",
    "    sh_score.append(sh_s)\n",
    "    # score\n",
    "    print('Silhouette score for K='+str(k)+':',sh_s)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3f8ea68",
   "metadata": {},
   "outputs": [],
   "source": [
    "# plot K against the silhouette score as a line chart (a likely exam question)\n",
    "plt.rc(\"font\",family='YouYuan')  # set the font\n",
    "plt.plot(k_list,sh_score)\n",
    "plt.xlabel('K')\n",
    "plt.ylabel('silhouette_score')\n",
    "plt.title('silhouette_score line chart')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "03ec5a4e",
   "metadata": {},
   "source": [
    "4. After choosing K, refit the model with that value"
   ]
  },
  {
   "cell_type": "raw",
   "id": "462f8c64",
   "metadata": {},
   "source": [
    "Exam question: predict with the best model's best parameters and merge the predictions back into the original data.\n",
    "    Any reasonable model and parameters may be chosen (as in the exam), but when both KMeans and DBSCAN are options, prefer KMeans."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bf23c2de",
   "metadata": {},
   "outputs": [],
   "source": [
    "model_kmeans = KMeans(n_clusters=2)\n",
    "model_kmeans.fit(df)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fcb64837",
   "metadata": {},
   "outputs": [],
   "source": [
    "# attach the cluster label to each customer   (merge the predictions into the original data)\n",
    "df['label'] = model_kmeans.labels_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cc3251c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# predict; model_kmeans was fitted on df above, so predict on the same features\n",
    "model_kmeans.predict(df)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1d8c66c",
   "metadata": {},
   "source": [
    "5. Save the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ffce9a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import joblib\n",
    "joblib.dump(model_kmeans,'./telecom_customer_clustering.pkl')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c755dfc",
   "metadata": {},
   "source": [
    "6. Model with K-medoids"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3ceb6429",
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install pyclustering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "70013838",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyclustering.cluster.kmedoids import kmedoids"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "864cfcc1",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_s = df.sample(200)   # random subsample of 200 rows so the algorithm runs faster (this is subsampling, not oversampling)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5c61bf01",
   "metadata": {},
   "outputs": [],
   "source": [
    "kmedoids_instance = kmedoids(data=df_s.values,initial_index_medoids=[16, 50, 7])  # initial_index_medoids: indices of the initial medoids; the number of indices given is the number of clusters\n",
    "# run the clustering\n",
    "kmedoids_instance.process()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5394f3b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# get the clustering result (a list of index lists, one per cluster)\n",
    "clusters = kmedoids_instance.get_clusters()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a394264",
   "metadata": {},
   "outputs": [],
   "source": [
    "# get the medoids (center points)\n",
    "medoids = kmedoids_instance.get_medoids() \n",
    "medoids"
   ]
  },
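  {
   "cell_type": "markdown",
   "id": "b8c9d0e1",
   "metadata": {},
   "source": [
    "pyclustering returns clusters as lists of row indices rather than a `labels_` array, so to score the k-medoids result with `silhouette_score` the index lists must be flattened into one label per row. A minimal sketch with a toy stand-in (the tiny `X` and `clusters` below are illustrative assumptions; with the real model you would use `df_s.values` and `kmedoids_instance.get_clusters()`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2d3e4f5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.metrics import silhouette_score\n",
    "\n",
    "# Toy stand-ins: 6 points and the index lists a pyclustering model might return.\n",
    "X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]])\n",
    "clusters = [[0, 1, 2], [3, 4, 5]]  # e.g. kmedoids_instance.get_clusters()\n",
    "\n",
    "labels = np.empty(len(X), dtype=int)\n",
    "for cluster_id, idx in enumerate(clusters):\n",
    "    labels[idx] = cluster_id  # assign this cluster's id to every row in its index list\n",
    "\n",
    "print(silhouette_score(X, labels))"
   ]
  },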
  {
   "cell_type": "markdown",
   "id": "d0a55100",
   "metadata": {},
   "source": [
    "7. Model with AGNES -- hierarchical clustering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "917a943a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.cluster  import AgglomerativeClustering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1fe56f62",
   "metadata": {},
   "outputs": [],
   "source": [
    "k_list, sh_score = [], []\n",
    "# try each candidate K\n",
    "for k in range(2,13):\n",
    "    # set the parameters\n",
    "    model_agnes = AgglomerativeClustering(n_clusters=k)\n",
    "    # fit the model\n",
    "    model_agnes.fit(df_mm)\n",
    "    sh_s = silhouette_score(df_mm,model_agnes.labels_)\n",
    "    k_list.append(k)\n",
    "    sh_score.append(sh_s)\n",
    "    # score\n",
    "    print('Silhouette score for K='+str(k)+':',sh_s)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ff3d937",
   "metadata": {},
   "outputs": [],
   "source": [
    "# plot K against the silhouette score as a line chart\n",
    "plt.rc(\"font\",family='YouYuan')\n",
    "plt.plot(k_list,sh_score)\n",
    "plt.xlabel('K')\n",
    "plt.ylabel('silhouette_score')\n",
    "plt.title('silhouette_score line chart')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c6da401",
   "metadata": {},
   "source": [
    "8. Model with BIRCH"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "52ef4442",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.cluster import Birch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "447e1537",
   "metadata": {},
   "outputs": [],
   "source": [
    "k_list, sh_score = [], []\n",
    "# try each candidate K\n",
    "for k in range(2,13):\n",
    "    # set the parameters\n",
    "    model_birch = Birch(n_clusters=k)\n",
    "    # fit the model\n",
    "    # np.ascontiguousarray: NumPy arrays can be stored in C (row-major) or Fortran (column-major) order. If the array is\n",
    "    # Fortran-ordered, Cython code may raise \"ValueError: ndarray is not C-contiguous\", so convert it to C order first\n",
    "    # (the original video did not include this call).\n",
    "    model_birch.fit(np.ascontiguousarray(df))\n",
    "    sh_s = silhouette_score(df,model_birch.labels_)\n",
    "    k_list.append(k)\n",
    "    sh_score.append(sh_s)\n",
    "    # score\n",
    "    print('Silhouette score for K='+str(k)+':',sh_s)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a40272b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# plot K against the silhouette score as a line chart\n",
    "plt.rc(\"font\",family='YouYuan')\n",
    "plt.plot(k_list,sh_score)\n",
    "plt.xlabel('K')\n",
    "plt.ylabel('silhouette_score')\n",
    "plt.title('silhouette_score line chart')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4f6e299",
   "metadata": {},
   "source": [
    "9. Model with DBSCAN"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1e7a0de",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.cluster import DBSCAN"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80b5e45b",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ds = df.sample(1000)\n",
    "model_dbscan = DBSCAN(eps=30,min_samples=1)    # eps: the neighborhood radius; min_samples: the minimum number of points a neighborhood must contain\n",
    "model_dbscan.fit(df_ds)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bb22113",
   "metadata": {},
   "outputs": [],
   "source": [
    "# silhouette_score requires at least 2 distinct labels in model_dbscan.labels_, otherwise it raises an error. Workarounds:\n",
    "# 1. subsample as above; 2. rerun (different samples may yield a different number of clusters); 3. tune eps or min_samples\n",
    "silhouette_score(df_ds,model_dbscan.labels_)"
   ]
  },
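  {
   "cell_type": "markdown",
   "id": "d6e7f8a9",
   "metadata": {},
   "source": [
    "DBSCAN never takes a cluster count; how many clusters emerge depends only on `eps` and `min_samples`, and points it cannot attach to any cluster get the label -1 (noise). A minimal sketch on synthetic blobs (the `make_blobs` data and parameter values are illustrative assumptions) showing how to count clusters and noise points from `labels_`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a0b1c2d3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.cluster import DBSCAN\n",
    "from sklearn.datasets import make_blobs\n",
    "\n",
    "X, _ = make_blobs(n_samples=200, centers=3, cluster_std=0.5, random_state=0)\n",
    "model = DBSCAN(eps=0.5, min_samples=5).fit(X)\n",
    "\n",
    "labels = model.labels_\n",
    "n_noise = int(np.sum(labels == -1))                         # DBSCAN marks outliers with -1\n",
    "n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 is noise, not a cluster\n",
    "print(n_clusters, 'clusters,', n_noise, 'noise points')"
   ]
  },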
  {
   "cell_type": "markdown",
   "id": "84a3655c",
   "metadata": {},
   "source": [
    "10. Cluster with OPTICS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4937d2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.cluster import OPTICS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bdb0bb0",
   "metadata": {},
   "outputs": [],
   "source": [
    "optics_model = OPTICS(min_samples=50)\n",
    "optics_model.fit(df)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f207fa9",
   "metadata": {},
   "outputs": [],
   "source": [
    "silhouette_score(df,optics_model.labels_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cb19065d",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
