{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e90af850",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "# 26\\. Implementing and Applying Hierarchical Clustering #\n",
    "\n",
    "## 26.1. Introduction #\n",
    "\n",
    "In earlier experiments we studied partition-based clustering, and in particular the excellent K-Means and K-Means++ algorithms. This experiment introduces a completely different family of methods: hierarchical clustering. Hierarchical clustering spares us from having to specify the number of clusters in advance, and it is also useful in certain specialized scenarios.\n",
    "\n",
    "## 26.2. Key Points #\n",
    "\n",
    "  * Overview of hierarchical clustering\n",
    "\n",
    "  * Bottom-up hierarchical clustering\n",
    "\n",
    "  * Top-down hierarchical clustering\n",
    "\n",
    "  * The BIRCH clustering algorithm\n",
    "\n",
    "  * Principal component analysis (PCA)\n",
    "\n",
    "## 26.3. Overview of Hierarchical Clustering #\n",
    "\n",
    "In the previous session we covered partition-based clustering, focusing on the K-Means algorithm. K-Means is among the most widely used clustering algorithms and works very well. But once you have used it, you run into a rather annoying problem: you must specify K, the number of clusters, by hand.\n",
    "\n",
    "Fixing the number of clusters in advance may sound like a small matter, but it is often troublesome, because before clustering we may not know how many groups the dataset actually contains. In the figure below, for example, grouping the points into either 2 or 4 clusters looks equally reasonable.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805745200.png)\n",
    "\n",
    "One of the biggest differences between hierarchical clustering, the topic of this experiment, and partition-based clustering is that we need not specify the number of clusters in advance. That sounds very attractive, so how is it achieved?\n",
    "\n",
    "Briefly, hierarchical clustering builds a tree with a hierarchical structure by computing the similarity between elements of the dataset. One point of emphasis: this \"hierarchy tree\" is completely different from the \"decision tree\" studied in the week-two lessons; do not confuse the two.\n",
    "\n",
    "Depending on how the tree is built, hierarchical clustering comes in two variants:\n",
    "\n",
    "  * Bottom-up hierarchical clustering: the process is called \"agglomerative\". Every element of the dataset starts as its own cluster, and clusters are iteratively merged into larger ones until a stopping criterion is met.\n",
    "\n",
    "  * Top-down hierarchical clustering: the process is called \"divisive\", the reverse of agglomeration. The whole dataset starts as a single cluster and is recursively split into smaller sub-clusters until a stopping criterion is met.\n",
    "\n",
    "## 26.4. Bottom-Up Hierarchical Clustering #\n",
    "\n",
    "Bottom-up hierarchical clustering is the Agglomerative Clustering algorithm. Its key idea is to cluster from the bottom up so that samples close to each other end up in the same cluster. Concretely, the main steps are as follows:\n",
    "\n",
    "For a dataset  $D$ ,  $D=\\left ( {x_1,x_2,\\cdots,x_n} \\right )$ :\n",
    "\n",
    "  1. Mark each sample in the dataset as its own cluster, so that  $D$  initially contains the clusters  $C$ ,  $C=\\left ( {c_1,c_2,\\cdots,c_n} \\right )$ .\n",
    "\n",
    "  2. Find the two closest clusters in  $C$  and merge them into one.\n",
    "\n",
    "  3. Keep merging until only a single cluster remains, at which point a complete hierarchy tree has been built.\n",
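    "\n",
    "As a quick cross-check (an illustrative sketch on a hypothetical five-point dataset, not part of this experiment's own implementation), the three steps above correspond to SciPy's `linkage`, which records the full merge history, and `fcluster`, which cuts the finished tree:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.cluster.hierarchy import fcluster, linkage\n",
    "\n",
    "# hypothetical sample points forming two obvious groups\n",
    "points = np.array([[0.0, 0.0], [0.5, 0.2], [4.0, 4.0], [4.2, 3.8], [3.0, 5.0]])\n",
    "\n",
    "Z = linkage(points, method='centroid')  # merge history: n - 1 merge rows\n",
    "labels = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters\n",
    "print(Z.shape, labels)\n",
    "```\n",
    "\n",
    "Each row of `Z` records one merge, so reading it from top to bottom replays the iterative merging described in the steps above.\n",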
    "\n",
    "The figures below illustrate the bottom-up process. We start with 5 points in the plane, each placed in its own cluster.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805745379.png)\n",
    "\n",
    "Next, we compute the pairwise distances between elements and merge the closest ones into one cluster, leaving 3 clusters in total.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805745593.png)\n",
    "\n",
    "Repeating the step above reduces the total to 2 clusters.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805745835.png)\n",
    "\n",
    "Finally, everything is merged into a single cluster and the clustering terminates.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805746050.png)\n",
    "\n",
    "Rendering the merging process above as a hierarchy tree gives:\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805746305.png)\n",
    "\n",
    "Having walked through the demonstration, you can see that bottom-up hierarchical clustering is really quite simple.\n",
    "\n",
    "## 26.5. Computing Inter-Cluster Distances #\n",
    "\n",
    "Although the clustering process looks simple, you may have noticed a problem: once a cluster contains several elements, how is the distance between two clusters determined?\n",
    "\n",
    "In other words, in the demonstration above, why was  $5$  merged into the cluster formed by  $<3, 4>$  rather than the one formed by  $<1, 2>$ ? Put differently, why not merge  $<1, 2>$  with  $<3, 4>$  first and bring in  $ 5$  last?\n",
    "\n",
    "This comes down to how distances are computed during agglomerative clustering. In short, there are four common linkage criteria:\n",
    "\n",
    "## 26.6. Single Linkage #\n",
    "\n",
    "Single linkage takes the distance between the closest pair of elements from the two clusters as the inter-cluster distance.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805746499.png)\n",
    "\n",
    "## 26.7. Complete Linkage #\n",
    "\n",
    "Complete linkage takes the distance between the farthest pair of elements from the two clusters as the inter-cluster distance.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805746709.png)\n",
    "\n",
    "## 26.8. Average Linkage #\n",
    "\n",
    "Average linkage computes the distance between every pair of elements drawn from the two clusters and takes the mean as the inter-cluster distance.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805746929.png)\n",
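    "\n",
    "As a small numeric illustration (on two hypothetical clusters `a` and `b`), all three pairwise criteria above can be read off the matrix of element-to-element distances:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.spatial.distance import cdist\n",
    "\n",
    "a = np.array([[0.0, 0.0], [1.0, 0.0]])  # cluster a\n",
    "b = np.array([[3.0, 0.0], [5.0, 0.0]])  # cluster b\n",
    "d = cdist(a, b)  # all pairwise Euclidean distances\n",
    "\n",
    "print(d.min())   # single linkage: 2.0\n",
    "print(d.max())   # complete linkage: 5.0\n",
    "print(d.mean())  # average linkage: 3.5\n",
    "```\n",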
    "\n",
    "## 26.9. Centroid Linkage #\n",
    "\n",
    "Average linkage may look the most reasonable, but computing all pairwise distances is often very expensive. An alternative is centroid linkage: first compute the center of each cluster, then take the distance between the two centers as the inter-cluster distance.\n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747181.png)\n",
    "\n",
    "In summary, of the four linkage criteria above, average linkage and centroid linkage are the most commonly used, because single and complete linkage are both relatively extreme and easily distorted by noise points and unevenly distributed data.\n",
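    "\n",
    "For reference (an illustrative sketch, not this experiment's implementation), scikit-learn's `AgglomerativeClustering` exposes several of these criteria through its `linkage` parameter; it offers 'single', 'complete', 'average', and 'ward', though not centroid linkage:\n",
    "\n",
    "```python\n",
    "from sklearn.cluster import AgglomerativeClustering\n",
    "from sklearn.datasets import make_blobs\n",
    "\n",
    "X, _ = make_blobs(n_samples=30, centers=2, random_state=10)\n",
    "\n",
    "# the same data clustered under three different linkage criteria\n",
    "for method in ['single', 'complete', 'average']:\n",
    "    model = AgglomerativeClustering(n_clusters=2, linkage=method)\n",
    "    print(method, model.fit_predict(X)[:10])\n",
    "```\n",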
    "\n",
    "## 26.10. Implementing Agglomerative Clustering in Python #\n",
    "\n",
    "Next, we implement the bottom-up hierarchical clustering algorithm in Python.\n",
    "\n",
    "We generate a set of sample data with the `make_blobs` method so that it trends toward 2 clusters. The random seed `random_state=10` is fixed so that your results match those shown in this experiment. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "76df6649",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(array([[  6.04774884, -10.30504657],\n",
       "        [  2.90159483,   5.42121526],\n",
       "        [  4.1575017 ,   3.89627276],\n",
       "        [  1.53636249,   5.11121453],\n",
       "        [  3.88101257,  -9.59334486],\n",
       "        [  1.70789903,   6.00435173],\n",
       "        [  5.69192445,  -9.47641249],\n",
       "        [  5.4307043 ,  -9.75956122],\n",
       "        [  5.85943906,  -8.38192364],\n",
       "        [  0.69523642,   3.23270535]]),\n",
       " array([0, 1, 1, 1, 0, 1, 0, 0, 0, 1]))"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn import datasets\n",
    "\n",
    "data = datasets.make_blobs(10, n_features=2, centers=2, random_state=10)\n",
    "data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bce888a6",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Plotting the sample data with Matplotlib confirms that it indeed falls into 2 groups. `data[1]` holds the labels preset when the data was generated; of course, the clustering procedure that follows does not get to see these preset labels. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "d3d3e3f7",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.collections.PathCollection at 0x1eafb471310>"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAioAAAGdCAYAAAA8F1jjAAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjEsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvc2/+5QAAAAlwSFlzAAAPYQAAD2EBqD+naQAAJa9JREFUeJzt3Ql8VOW9//HvmclCgCRsQUAighsiIAiIiKIsL1DRSmtRK1hQLlctIrgVov5FXAherXWtoLWgvW4tXkRRRFQWFygC1YrKJlpSwi4kECAkM+f/eg5JmlQSJprMeWbyeb9ej8mcnAm/jEnON892HNd1XQEAAFgo4HcBAAAAlSGoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACslaAYFw6HlZubq9TUVDmO43c5AAAgAma/2b1796pVq1YKBALxG1RMSMnMzPS7DAAA8CPk5OSodevW8RtUTE9K6RealpbmdzkAACAC+fn5XkdD6XU8boNK6XCPCSkEFQAAYsvRpm0wmRYAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYK2Y30cFscEN7ZIOzpEb+pf3beckdZWSB8hxEv0uDQBgMYIKapXrFsrNv0868Jp5VNaJ5+6fKQWaSKl3yEn5md9lAgAs5fvQz+bNmzV8+HA1bdpUKSkp6tSpk1asWOF3WagBrlskd/d/SwdmSQqZW0hKKi5p5uH3cvNuk7v/Fb9LBQBYytceld27d6t3797q27ev5s2bp4yMDK1fv16NGzf2syzUlP0vSoeWlfSkVM7Nv0dKPl9OsEXUSgMAxAZfg8qDDz7o3ZBoxowZZcfatm3rZ0moIa4blrv/hcjP3/+qnNRxtVoTACD2+Dr088Ybb6h79+4aOnSomjdvrq5du+rZZ5/1syTUlOINkjdxturelMPC0sG3o1AUACDW+BpUNm7cqKefflonnXSS5s+frxtuuEE33XSTnn/++UqfU1hY6N0aunyDhdz82j0fAFAn+Dr0Ew6HvR6VKVOmeI9Nj8rq1as1bdo0jRgx4ojPyc7O1uTJk6NcKaot0Kh65zvVPB8AUCf42qPSsmVLdejQocKxU089VZs2bar0OVlZWcrLyytrOTk5UagU1RY8QQq2MQkkgpMDclIujkJRAIBY42uPilnxs3bt2grH1q1bpzZtzAXuyJKTk70GuzmOI9UfIXfvvRGcHZBShkahKgBArPG1R+Xmm2/WsmXLvKGfDRs26KWXXtIzzzyjMWPG+FkWakr9K6XkvlX0qpjjjpz0KXKCzaNcHAAgFvgaVHr06KHZs2fr5ZdfVseOHXXffffp0Ucf1bBhw/wsCzXEcRLkNHpSqj9SUlJJMEn4d0deoIWcRk/ISRnid6kAAEs5rutGsn7UWmbVT3p6ujdfJS0tze9yUAk3vFc6+JbckJlTlHT4Xj9J58hxfN8cuU7yfuyLVsktXGQ2sZETaCrVGywnofJhVwDw4/rNvX4QFU4g1RsKimRqLWqXW/QPuXsmSqENZtaz19Plmv1u9j0qN+lcOelT5QQz/C4TADz8OQvUIe6hz+XuukoKbSw5Eiq595J5K+nQJ3J3DZUb2ulnmQBQhqAC1BGuG5K7Z1xJMDE3iDySkBTeJnfv4b2NAMBvBBWgrjj0oRTOrSKklApJB+fRqwLACgQVoI5wD75bMiclEiHJTLQFAJ8xmRbVXy1iVu64BVKgKfufxBKz8uqovSmlAtx/CYAVCCqIiOsekva/LHf/n6XQv29x4Cb2kNNgpJQ84PButLBXIK2kE7Vk4myVwpKTHoWiAKBqDP3gqNzwPrnfX314gqW3D0o5RSvl7hkjd2/24d4WWMupNyjCkGIkSPXMrsIA4C+CCo7KzZsoFX1u3itp5ZUMJeyfKe1/0Y/yEKmkc6TgsRH82AelehfJCTSJUmEAUDmCCqrkFn8rFb4b0dwG
t2CatwQWdjK7ADuNHpeUWMWPflAKtpSTdkeUqwOAIyOooErugVmRrxQJb5cOfVzbJeEncBI7yWn6ipRwcsmRYMlUNfOrwJGSz5PT5K/0pgCwBpNpUbXif1ZjpYgjFX8nJfep5aLwUziJp8lp9sbhrfQPmnv9FJTc6+ciOQmt/S4PACogqKBqzuF7wfxwbsqRuJLDt1SscBI7ew0AbMbQD6p0+EJWjdU8XPgAADWIoIKqpfwiwjkqjpTQQU5ixygUBQCoKwgqqJITaCyn4Q1HO8trTupvo1QVAKCuIKjg6BrcKDX4r5IHwSN8CyXIafSonOSzfSgOABDPmPmIozJb45veErfexXL3vyQVvm/WLUuBJnJSfi6lXC4neIzfZQIA4hBBBRFzEjvISb9fkmkAANQ+hn4AAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYC3u9QMAFnPdkBTaIqlICmTICTT0uySg7vaoTJ061btT7/jx4/0uBQB85Ybz5O57Wu6OPnJ39pO7c5Dc7WcqvOc2uUVf+l0eUPeCyqeffqrp06erc+fOfpcCAL5yQ1vk7vq53H2PSeEd5T5SLB18S+6uy+QemONjhUAdCyr79u3TsGHD9Oyzz6px48Z+lwMAvnHdYrnfX1sy3BM+whkh77ibN0HuoVU+VAjUwaAyZswYDR48WAMGDDjquYWFhcrPz6/QACBuFH4ghb4pCSRVceQWTI9SUUAdnkz7yiuvaNWqVd7QTySys7M1efLkWq8LAPzg7n+l5G/II/WmlBeSChfJDW2XE2wepeqAOtajkpOTo3HjxunFF19UvXr1InpOVlaW8vLyypr5HAAQN4q/iSCklHKl0KZaLgiowz0qK1eu1Pbt23XGGWeUHQuFQlqyZImefPJJb5gnGAxWeE5ycrLXACA+Vffvx4q/I4F442tQ6d+/v7744osKx6655hq1b99eEyZM+EFIAYC4l9RFOrg1gjkq3slSwglRKAqoo0ElNTVVHTt2rHCsQYMGatq06Q+OA0Bd4NS/Su7BtyI4MyjVu0ROIC0KVQF1fNUPAKBEYncpqc9Rfj0HJCdZTsProlgYUEdX/fynRYsW+V0CAPjG7M6tRo/J3TNGOvRJyRyU8sNAjuQ0kNP4OTkJx/tYKVBHgwoA1HVOoIHU+DmpcInc/X+WilZJbrEUbCmn/pVSymVyAo38LhOICoIKAFjIccwclL5y6vX1uxTAVwSVI3BDW6XC96VwvuSkSvX6yQm28rssAADqHIJKOW5oh9z8e6XCBYc3UirdHXLvfXKT+8pJu0dOsIXfZQIAUGew6qeE2Yba3fVLqfC9kl0h3ZIJbOatKxUuPnzHUu9GYQAAIBoIKiXcvDul8PYqNlkKSeHv5e65PcqVAQBQdxFUTEgp3iQdWhLBTpAhqWi53OINUaoMAIC6jaBiHHzn8N4EEQnKPfB2LRcEAAAMgorpUQnvrsZL4UiuOR8AANQ2gkrp5krepNlIuJLTsJYrAgAABkHFSO4X4Z1KjZAc73wAAFDbCCqmRyWxg5TQOYKXIyAlnCwldolSZQAA1G0ElRJO+hTJqVdyA7DKXqokOekPHr5pGAAAqHUElRJO4slymrwqBduU27Q3+O/Ne4OZcpq+LCfxND/LBACgTmEL/XKcxFOkZvOkohVyD86XwnukQJqc5IFSUk96UgAAiDKCyn/wwkhSDzlJPfwuBQCAOo+hHwAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsRVABAADWIqgAAABr+R5UsrOz1aNHD6Wmpqp58+YaMmSI1q5d63dZAADAAr4HlcWLF2vMmDFatmyZFixYoKKiIg0cOFAFBQV+lwYAAHzmuK7ryiI7duzwelZMgOnTp89Rz8/Pz1d6erry
8vKUlpYWlRoBAMBPE+n1O0GWMQUbTZo0OeLHCwsLvVb+CwUAAPHJ96Gf8sLhsMaPH6/evXurY8eOlc5pMQmstGVmZka9TgAAUAeHfm644QbNmzdPH330kVq3bh1xj4oJKwz9AAAQO2Ju6OfGG2/U3LlztWTJkkpDipGcnOw1AAAQ/3wPKqZDZ+zYsZo9e7YWLVqktm3b+l0SAACwhO9BxSxNfumllzRnzhxvL5WtW7d6x013UEpKit/lAQCAujxHxXGcIx6fMWOGRo4cedTnszwZAIDYEzNzVCyaywsAACxj1fJkAACA8ggqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsRVABAADWIqgAAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsZUVQeeqpp3T88cerXr166tmzp5YvX+53SQAAwAK+B5VXX31Vt9xyiyZNmqRVq1bp9NNP16BBg7R9+3a/SwMAAHU9qDzyyCMaPXq0rrnmGnXo0EHTpk1T/fr19ac//cnv0gAAQF0OKocOHdLKlSs1YMCAfxcUCHiPly5desTnFBYWKj8/v0IDAADxydegsnPnToVCIR1zzDEVjpvHW7duPeJzsrOzlZ6eXtYyMzOjVC0AAKhzQz/VlZWVpby8vLKWk5Pjd0kAAKCWJMhHzZo1UzAY1LZt2yocN49btGhxxOckJyd7DQAAxD9fe1SSkpLUrVs3vf/++2XHwuGw97hXr15+lgYAAOp6j4phliaPGDFC3bt315lnnqlHH31UBQUF3iogAABQt/keVK644grt2LFDd999tzeBtkuXLnrnnXd+MMEWAADUPY7ruq5imFmebFb/mIm1aWlpfpcDAABq8Podc6t+AABA3UFQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsRVABAADWIqgAAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLd+CynfffadRo0apbdu2SklJ0QknnKBJkybp0KFDfpUEAAAsk+DXP7xmzRqFw2FNnz5dJ554olavXq3Ro0eroKBADz/8sF9lAQAAiziu67qyxEMPPaSnn35aGzdujPg5+fn5Sk9PV15entLS0mq1PgAAUDMivX771qNyJKbYJk2aVHlOYWGh18p/oQAAID5ZM5l2w4YNeuKJJ3TddddVeV52draXwEpbZmZm1GoEAAAxHlQmTpwox3GqbGZ+SnmbN2/WBRdcoKFDh3rzVKqSlZXl9byUtpycnJr+EgAAQLzOUdmxY4d27dpV5Tnt2rVTUlKS935ubq7OP/98nXXWWZo5c6YCgeplJ+aoAAAQe3ybo5KRkeG1SJielL59+6pbt26aMWNGtUMKAACIb75NpjUhxfSktGnTxluObHpiSrVo0cKvsgAAgEV8CyoLFizwJtCa1rp16wofs2jFNAAA8JFvYy0jR470AsmRGgAAgMGkEAAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsRVABAADWIqgAAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAa1kRVAoLC9WlSxc5jqPPPvvM73IAAIAl
rAgqv/3tb9WqVSu/ywAAAJbxPajMmzdP7777rh5++GG/SwEAAJZJ8PMf37Ztm0aPHq3XX39d9evX97MUAABgId+Ciuu6GjlypK6//np1795d3333XcTzWUwrlZ+fX4tVAgCAuBr6mThxojcptqq2Zs0aPfHEE9q7d6+ysrKq9fmzs7OVnp5e1jIzM2v6SwAAAJZwXNO1UYN27NihXbt2VXlOu3btdPnll+vNN9/0gkupUCikYDCoYcOG6fnnn4+4R8WElby8PKWlpdXgVwIAAGqLuX6bDoejXb9rPKhEatOmTRWGbXJzczVo0CDNmjVLPXv2VOvWrWv0CwUAAPaI9Prt2xyV4447rsLjhg0bem9POOGEiEMKAACIb74vTwYAALByeXJ5xx9/vLcSCAAAoBQ9KgAAwFoEFQAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsRVABAADWIqgAAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAa/keVN566y317NlTKSkpaty4sYYMGeJ3SQAAwBIJfv7jr732mkaPHq0pU6aoX79+Ki4u1urVq/0sCQAAWMS3oGJCybhx4/TQQw9p1KhRZcc7dOjgV0kAAMAyvg39rFq1Sps3b1YgEFDXrl3VsmVLXXjhhUftUSksLFR+fn6FBgAA4pNvQWXjxo3e23vuuUd33XWX5s6d681ROf/88/X9999X+rzs7Gylp6eXtczMzChWDQAAYjqoTJw4UY7jVNnWrFmjcDjsnX/nnXfqsssuU7du3TRjxgzv43/9618r/fxZWVnKy8srazk5OTX9JQAAgHido3Lrrbdq5MiRVZ7Trl07bdmy5QdzUpKTk72Pbdq0qdLnmnNMAwAA8a/Gg0pGRobXjsb0oJjAsXbtWp1zzjnesaKiIn333Xdq06ZNTZcFAABikG+rftLS0nT99ddr0qRJ3jwTE07MCiBj6NChfpUFAAAs4us+KiaYJCQk6Oqrr9aBAwe8jd8++OADb1ItAACA47quqxhmlieb1T9mYq3ppQEAAPFz/fZ9C30AAIDKEFQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAYC2CCgAAsBZBBQAAWIugAgAArEVQAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVAABgLYIKAACwFkEFAABYi6ACAACsRVABAADWIqgAAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFjL16Cybt06XXrppWrWrJnS0tJ0zjnnaOHChX6WBAAALOJrULn44otVXFysDz74QCtXrtTpp5/uHdu6daufZQEAgLoeVHbu3Kn169dr4sSJ6ty5s0466SRNnTpV+/fv1+rVq/0qCwAAWMS3oNK0aVOdcsopeuGFF1RQUOD1rEyfPl3NmzdXt27dKn1eYWGh8vPzKzQAABCfEvz6hx3H0XvvvachQ4YoNTVVgUDACynvvPOOGjduXOnzsrOzNXny5KjWCgAA4qRHxQzlmBBSVVuzZo1c19WYMWO8cPLhhx9q+fLlXmi55JJLtGXLlko/f1ZWlvLy8spaTk5OTX8JAADEhS3fbtPaFd/oX+tyvetuLHLcGq58x44d2rVrV5XntGvXzgsnAwcO1O7du70VP6XMXJVRo0Z5gScSZugnPT3dCy3lPw8AAHWR67p673+X6P8efUsb/v5t2fFjT2qhITdepMHXDVBiUqL8Fun1u8aHfjIyMrx2NGbSrGGGfMozj8PhcE2XBQBA3AuHw3po5FNeUHECToWP5W7Ypj+M/5M+fn257p87UckpyYoF
vk2m7dWrlzcXZcSIEfr888+9PVVuv/12ffvttxo8eLBfZQEAELNevP81vffiEu99N+z+oKfFjKF8vvhLPf6bPypW+BZUzCZvZuLsvn371K9fP3Xv3l0fffSR5syZ4+2nAgAAIld4oFCzHnlTOsqEDhNgFvx5sXZurnqahur6qh/DhJP58+f7WQIAAHHh49nLtT//QETnmoUt7z6/WFfd8QvZjnv9AAAQB3K/2aZgQjDioLLlm9jYBZ6gAgBAHAgmBKu1BDnSUOM3ggoAAHGgfc8TFQ5Ftmo2VBxS+54nKRYQVAAAiANd+nZUqxOOkVNxVfIRpaTW0/lX9lYsIKgAABAHHMfR9Y+MPNqiH8+oKcNUrz77qAAAgCjqdUl3TXh+rIIJAQWC/7GhqnnsmJBylS4dc4Fiha/LkwEAQM0aMLyPuvQ9TW89857ef/FD5X+/V/VTU9TnsrN08fUD1frkVqrT9/qJNu71AwBA/F6/GfoBAADWIqgAAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANZiC30AACwVDof16TufaemcT1Ww94DSmjRUn6G91LlPB+8mhHUBQQUAAAt9+claTbnqUW3ftFPBhKDccFhOIKA3/jBfx516rP7fX27V8adlKt4x9AMAP9HO3O/1xYdfexeWgrwCv8tBHDDfS7f1u0c7/7XLexwqDikcdr23xr/WbdG43nfqn1//S/GOHhUA+JFWf7xGr0ydreVv/12l93dNTE5Q/2F9dOXEITr2xJZ+l4gYZL6X/mfEEwqXhJMjCYfCOlhQqMeuf0aPLL5X8YweFQD4ERa8sFi3nHe3N3+g/E3oiwqLteCFRfpN9wlas3y9rzUiNv39g9XK/WZbpSGlfFgxPXn//CpH8YygAgDVtPbTDXro2qfkhl3vYvGfQsWH/9q948Ip2rt7ny81Inb9be5KBRODEZ0bCAa0bO4qxTOGfgCgmmb9/k0FAo5CVfzFawLMvj0FXs/LL8YNjmp9iL68nfmaP2OhPvy/v6lgT4EaNU9X3yt7q//wPqqfmlKtz7V/7wGp6s6UMk7A0QFzfhyjRwUAqsFMlv1w1jKv1+RoXLmaO31BVOqCf0wYvbL1dfpj1ota87f1ylmbq9Uffa3Hb/yjrjz2v7V83t+r9fkaZaR53z2RCIfCSvfOj18EFQCohl1b9kQUUjyutP2fO2q7JPho0asf639GPqniQ8XeUGApb9qSK28I8O5Lp+rzRV9G/DnPu+LsyL/HJJ37y7MUzwgqAFANCRHOHSgV6VwDxJ6iQ0V6cuxzVZ5jJlqbSbFP3vRchUnXVTmxS1t16HWygglVX6LN/JRzf9FTzVo1UTwjqABANRzTJsObfxAJc6HpeO6ptV4T/PHJ658qb+feo55nelq+W52jr5aui/hzZ704TmlNUysNKyaktGjbXDf9YbTiHUEFAKrB7BD6sxsGeReKozHd90PGXBCVuhB9ny/+qlqrc75Y8lXEn7vF8c315PKpOvOiM7yt8k3zQotzOACbibpPLJ2i9GbxPT/FYNUPAFTTpWMv0PyZC7Vz865K5xKYC9MZ/Tup28DTo14foqOosKhaq3PMHjvV0Tyzme59fYK25+zU395apYK8/V4vS6+fdVfjCHv14kGt9ag88MADOvvss1W/fn01atToiOds2rRJgwcP9s5p3ry5br/9dhUXV+9/JABEW1qTVP1u0WS1PKGF97h870ppV333Qafr7tduUyBAx3W8MkEi0nknZuv7jMymP/rfueT6gbpywhBd9F/961RIqdUelUOHDmno0KHq1auXnnvuh5ONQqGQF1JatGihTz75RFu2bNGvf/1rJSYmasqUKbVVFgDU2FyVZ//xOy19c6XmTpuvnDW5XmA5tddJuvQ3F+i03u3rzN1t66oBv+6jF+79S0TnJiUn6tzLetZ6TfHIcSONgz/SzJkzNX78eO3Zs6fC8Xnz5uniiy9W
bm6ujjnmGO/YtGnTNGHCBO3YsUNJSUkRff78/Hylp6crLy9PaWnxP1YHALDHPZc9pKVvrDjiDsXlh31+PvYi3fD7kVGtzXaRXr9965NcunSpOnXqVBZSjEGDBnmFf/ll5evNCwsLvXPKNwAA/HD7n36jth0zKx3iMyGla7+OGjV1WNRrixe+TabdunVrhZBilD42H6tMdna2Jk+eXOv1AQBwNA3SG+j3H96nl7Nna+60d7V3d0HZx5q2aqwhYy/SL2+5WAmJsbV2xXVdbzn1sjdXqCD/gNKbpeq8y8/W8adlRr2War1yEydO1IMPPljlOV9//bXat2+v2pKVlaVbbrml7LHpUcnMjP4LBwCAkdIwRdc+cJWG3z3U20K/dHVO+54nKhiMvQ3/1q/a6O22a/Z+McvxzZJosxfM/943S536nKqJL4xV8+My7Awqt956q0aOrHqMrV27dhF9LjOJdvny5RWObdu2rexjlUlOTvYaAAA2MRNmO/fpoFi2buU3uqXP3So6VFy2Wqm8rz5Zq7Fn3eHt8ZLR+setYqrVoJKRkeG1mmBWA5klzNu3b/eWJhsLFizwJtR06BDb/6MBAIg1rutqyrDHvJBS2eRgs2+QuVP04795Vve9MTEqddXaZFqzR8pnn33mvTVLkc37pu3bt8/7+MCBA71AcvXVV+vzzz/X/Pnzddddd2nMmDH0mAAAEGWfLVytzeu2VLmCqTSsmA3otkXphpu1FlTuvvtude3aVZMmTfLCiXnftBUrVngfN+N2c+fO9d6a3pXhw4d7+6jce++9tVUSAACoxMezlx+ekxIJR96y7GhIqM39U0yrSps2bfT222/XVgkAACBCBfn7I95p1yzH3rfn3yucahN7OwMAADVs1CDi3ZTN8FBqk4aKBoIKAABQn1/2+sEqn0o5Uu8hPRQNBBUAAKCO57RXmw6tK9xk80gCCQH1HnKmmh0bneXJBBUAACAz7HPnKzcruX5SpWHF3B28WasmGvvkqKjVRVABAACeth2P0xNLp+iUHid6j01gSUgMHg4ujtRtYBc9sWyKmrRorLi5e3Jt4+7JAADUvI3/+Ke3BHl//n6lNUtTn6FnqWXbivfoi8b1O7bukgQAAKKiXec2XvMbQz8AAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLVifsO30o11zQ53AAAgNpRet4+2QX7MB5W9e/d6bzMzM/0uBQAA/IjruNlKP27v9RMOh5Wbm6vU1FTvzo+xmChNyMrJyeFeRTWA17Pm8ZrWLF7PmsXrGbuvp4kfJqS0atVKgUAgfntUzBfXunVrxTrzDcEPWc3h9ax5vKY1i9ezZvF6xubrWVVPSikm0wIAAGsRVAAAgLUIKj5LTk7WpEmTvLf46Xg9ax6vac3i9axZvJ7x/3rG/GRaAAAQv+hRAQAA1iKoAAAAaxFUAACAtQgqAADAWgQVnyxZskSXXHKJtyOf2VH39ddf97ukmJadna0ePXp4OxQ3b95cQ4YM0dq1a/0uK2Y9/fTT6ty5c9mmT7169dK8efP8LituTJ061fu5Hz9+vN+lxKx77rnHew3Lt/bt2/tdVkzbvHmzhg8frqZNmyolJUWdOnXSihUr/C6LoOKXgoICnX766Xrqqaf8LiUuLF68WGPGjNGyZcu0YMECFRUVaeDAgd7rjOozuz2bi+nKlSu9X1T9+vXTpZdeqi+//NLv0mLep59+qunTp3tBED/Naaedpi1btpS1jz76yO+SYtbu3bvVu3dvJSYmen+UfPXVV/rd736nxo0b+11a7G+hH6suvPBCr6FmvPPOOxUez5w50+tZMRfaPn36+FZXrDK9feU98MADXi+LCYLm4oAfZ9++fRo2bJieffZZ3X///X6XE/MSEhLUokULv8uICw8++KB3j58ZM2aUHWvbtq1sQI8K4lJeXp73tkmTJn6XEvNCoZBeeeUVr3fKDAHhxzO9foMH
D9aAAQP8LiUurF+/3hs+b9eunRcAN23a5HdJMeuNN95Q9+7dNXToUO+PvK5du3qB2gb0qCDumDtqm7F/043ZsWNHv8uJWV988YUXTA4ePKiGDRtq9uzZ6tChg99lxSwT9latWuUN/eCn69mzp9dzesopp3jDPpMnT9a5556r1atXe3PVUD0bN270ek1vueUW3XHHHd736U033aSkpCSNGDFCfiKoIC7/ajW/rBiv/mnMBeCzzz7zeqdmzZrl/bIyc4EIK9WXk5OjcePGefOn6tWr53c5caH80LmZ72OCS5s2bfSXv/xFo0aN8rW2WP0Dr3v37poyZYr32PSomN+j06ZN8z2oMPSDuHLjjTdq7ty5WrhwoTchFD+e+UvqxBNPVLdu3bxVVWby92OPPeZ3WTHJzJXavn27zjjjDG9ehWkm9D3++OPe+2Z4DT9No0aNdPLJJ2vDhg1+lxKTWrZs+YM/Qk499VQrhtPoUUFcMLesGjt2rDc8sWjRImsmgcXbX1yFhYV+lxGT+vfv7w2llXfNNdd4y2knTJigYDDoW23xNFH5m2++0dVXX+13KTGpd+/eP9jSYd26dV4vld8IKj7+UJVP/t9++63XzW4mfx533HG+1harwz0vvfSS5syZ441Pb9261Tuenp7u7QeA6snKyvK61s334t69e73X1gTA+fPn+11aTDLfk/85X6pBgwbefhXMo/pxbrvtNm91mrmQ5ubmenf8NYHvV7/6ld+lxaSbb75ZZ599tjf0c/nll2v58uV65plnvOY7c/dkRN/ChQvNXat/0EaMGOF3aTHpSK+laTNmzPC7tJh07bXXum3atHGTkpLcjIwMt3///u67777rd1lx5bzzznPHjRvndxkx64orrnBbtmzpfY8ee+yx3uMNGzb4XVZMe/PNN92OHTu6ycnJbvv27d1nnnnGtYFj/uN3WAIAADgSJtMCAABrEVQAAIC1CCoAAMBaBBUAAGAtggoAALAWQQUAAFiLoAIAAKxFUAEAANYiqAAAAGsRVAAAgLUIKgAAwFoEFQAAIFv9f5YgPFfaXdfzAAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 640x480 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from matplotlib import pyplot as plt\n",
    "\n",
    "%matplotlib inline\n",
    "\n",
    "plt.scatter(data[0][:, 0], data[0][:, 1], c=data[1], s=60)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d58ae8f",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "\n",
    "\n",
    "First, we implement a function to compute the Euclidean distance: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "54c83c6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def euclidean_distance(a, b):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    a -- array a\n",
    "    b -- array b\n",
    "\n",
    "    Returns:\n",
    "    dist -- Euclidean distance between a and b\n",
    "    \"\"\"\n",
    "    # Euclidean distance between two 2-D points, rounded to 2 decimals\n",
    "    x = float(a[0]) - float(b[0])\n",
    "    y = float(a[1]) - float(b[1])\n",
    "    dist = round(np.sqrt(x * x + y * y), 2)\n",
    "    return dist"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ecd0866",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Next, we implement the agglomerative clustering function, using the centroid-linkage criterion. To make the hierarchical clustering process more concrete, the function includes some extra `print()` calls. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "fbad26ca",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "def agglomerative_clustering(data):\n",
    "    # Agglomerative clustering procedure (centroid linkage)\n",
    "\n",
    "    while len(data) > 1:\n",
    "        print(\"☞ 第 {} 次迭代\\n\".format(10 - len(data) + 1))  # iteration number (assumes 10 initial samples)\n",
    "        min_distance = float(\"inf\")  # initialize the minimum distance to infinity\n",
    "        for i in range(len(data)):\n",
    "            print(\"---\")\n",
    "            for j in range(i + 1, len(data)):\n",
    "                distance = euclidean_distance(data[i], data[j])\n",
    "                print(\"计算 {} 与 {} 距离为 {}\".format(data[i], data[j], distance))\n",
    "                if distance < min_distance:\n",
    "                    min_distance = distance\n",
    "                    min_ij = (i, j)\n",
    "        i, j = min_ij  # indices of the closest pair\n",
    "        data1 = data[i]\n",
    "        data2 = data[j]\n",
    "        data = np.delete(data, j, 0)  # remove point j first (j > i keeps index i valid)\n",
    "        data = np.delete(data, i, 0)  # then remove point i\n",
    "        b = np.atleast_2d(\n",
    "            [(data1[0] + data2[0]) / 2, (data1[1] + data2[1]) / 2]\n",
    "        )  # centroid of the two merged points\n",
    "        data = np.concatenate((data, b), axis=0)  # append the new centroid for the next round\n",
    "        print(\"\\n最近距离:{} & {} = {}, 合并后中心:{}\\n\".format(data1, data2, min_distance, b))\n",
    "\n",
    "    return data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "c55577d8",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "☞ 第 1 次迭代\n",
      "\n",
      "---\n",
      "计算 [  6.04774884 -10.30504657] 与 [2.90159483 5.42121526] 距离为 16.04\n",
      "计算 [  6.04774884 -10.30504657] 与 [4.1575017  3.89627276] 距离为 14.33\n",
      "计算 [  6.04774884 -10.30504657] 与 [1.53636249 5.11121453] 距离为 16.06\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 3.88101257 -9.59334486] 距离为 2.28\n",
      "计算 [  6.04774884 -10.30504657] 与 [1.70789903 6.00435173] 距离为 16.88\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 5.69192445 -9.47641249] 距离为 0.9\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 5.4307043  -9.75956122] 距离为 0.82\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 5.85943906 -8.38192364] 距离为 1.93\n",
      "计算 [  6.04774884 -10.30504657] 与 [0.69523642 3.23270535] 距离为 14.56\n",
      "---\n",
      "计算 [2.90159483 5.42121526] 与 [4.1575017  3.89627276] 距离为 1.98\n",
      "计算 [2.90159483 5.42121526] 与 [1.53636249 5.11121453] 距离为 1.4\n",
      "计算 [2.90159483 5.42121526] 与 [ 3.88101257 -9.59334486] 距离为 15.05\n",
      "计算 [2.90159483 5.42121526] 与 [1.70789903 6.00435173] 距离为 1.33\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.69192445 -9.47641249] 距离为 15.16\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.4307043  -9.75956122] 距离为 15.39\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.85943906 -8.38192364] 距离为 14.12\n",
      "计算 [2.90159483 5.42121526] 与 [0.69523642 3.23270535] 距离为 3.11\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [1.53636249 5.11121453] 距离为 2.89\n",
      "计算 [4.1575017  3.89627276] 与 [ 3.88101257 -9.59334486] 距离为 13.49\n",
      "计算 [4.1575017  3.89627276] 与 [1.70789903 6.00435173] 距离为 3.23\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.69192445 -9.47641249] 距离为 13.46\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.4307043  -9.75956122] 距离为 13.72\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.85943906 -8.38192364] 距离为 12.4\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "---\n",
      "计算 [1.53636249 5.11121453] 与 [ 3.88101257 -9.59334486] 距离为 14.89\n",
      "计算 [1.53636249 5.11121453] 与 [1.70789903 6.00435173] 距离为 0.91\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.69192445 -9.47641249] 距离为 15.17\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.4307043  -9.75956122] 距离为 15.37\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.85943906 -8.38192364] 距离为 14.17\n",
      "计算 [1.53636249 5.11121453] 与 [0.69523642 3.23270535] 距离为 2.06\n",
      "---\n",
      "计算 [ 3.88101257 -9.59334486] 与 [1.70789903 6.00435173] 距离为 15.75\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.69192445 -9.47641249] 距离为 1.81\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.4307043  -9.75956122] 距离为 1.56\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.85943906 -8.38192364] 距离为 2.32\n",
      "计算 [ 3.88101257 -9.59334486] 与 [0.69523642 3.23270535] 距离为 13.22\n",
      "---\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.69192445 -9.47641249] 距离为 15.99\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.4307043  -9.75956122] 距离为 16.2\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.85943906 -8.38192364] 距离为 14.97\n",
      "计算 [1.70789903 6.00435173] 与 [0.69523642 3.23270535] 距离为 2.95\n",
      "---\n",
      "计算 [ 5.69192445 -9.47641249] 与 [ 5.4307043  -9.75956122] 距离为 0.39\n",
      "计算 [ 5.69192445 -9.47641249] 与 [ 5.85943906 -8.38192364] 距离为 1.11\n",
      "计算 [ 5.69192445 -9.47641249] 与 [0.69523642 3.23270535] 距离为 13.66\n",
      "---\n",
      "计算 [ 5.4307043  -9.75956122] 与 [ 5.85943906 -8.38192364] 距离为 1.44\n",
      "计算 [ 5.4307043  -9.75956122] 与 [0.69523642 3.23270535] 距离为 13.83\n",
      "---\n",
      "计算 [ 5.85943906 -8.38192364] 与 [0.69523642 3.23270535] 距离为 12.71\n",
      "---\n",
      "\n",
      "最近距离:[ 5.69192445 -9.47641249] & [ 5.4307043  -9.75956122] = 0.39, 合并后中心:[[ 5.56131437 -9.61798686]]\n",
      "\n",
      "☞ 第 2 次迭代\n",
      "\n",
      "---\n",
      "计算 [  6.04774884 -10.30504657] 与 [2.90159483 5.42121526] 距离为 16.04\n",
      "计算 [  6.04774884 -10.30504657] 与 [4.1575017  3.89627276] 距离为 14.33\n",
      "计算 [  6.04774884 -10.30504657] 与 [1.53636249 5.11121453] 距离为 16.06\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 3.88101257 -9.59334486] 距离为 2.28\n",
      "计算 [  6.04774884 -10.30504657] 与 [1.70789903 6.00435173] 距离为 16.88\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 5.85943906 -8.38192364] 距离为 1.93\n",
      "计算 [  6.04774884 -10.30504657] 与 [0.69523642 3.23270535] 距离为 14.56\n",
      "计算 [  6.04774884 -10.30504657] 与 [ 5.56131437 -9.61798686] 距离为 0.84\n",
      "---\n",
      "计算 [2.90159483 5.42121526] 与 [4.1575017  3.89627276] 距离为 1.98\n",
      "计算 [2.90159483 5.42121526] 与 [1.53636249 5.11121453] 距离为 1.4\n",
      "计算 [2.90159483 5.42121526] 与 [ 3.88101257 -9.59334486] 距离为 15.05\n",
      "计算 [2.90159483 5.42121526] 与 [1.70789903 6.00435173] 距离为 1.33\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.85943906 -8.38192364] 距离为 14.12\n",
      "计算 [2.90159483 5.42121526] 与 [0.69523642 3.23270535] 距离为 3.11\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.56131437 -9.61798686] 距离为 15.27\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [1.53636249 5.11121453] 距离为 2.89\n",
      "计算 [4.1575017  3.89627276] 与 [ 3.88101257 -9.59334486] 距离为 13.49\n",
      "计算 [4.1575017  3.89627276] 与 [1.70789903 6.00435173] 距离为 3.23\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.85943906 -8.38192364] 距离为 12.4\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.56131437 -9.61798686] 距离为 13.59\n",
      "---\n",
      "计算 [1.53636249 5.11121453] 与 [ 3.88101257 -9.59334486] 距离为 14.89\n",
      "计算 [1.53636249 5.11121453] 与 [1.70789903 6.00435173] 距离为 0.91\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.85943906 -8.38192364] 距离为 14.17\n",
      "计算 [1.53636249 5.11121453] 与 [0.69523642 3.23270535] 距离为 2.06\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.56131437 -9.61798686] 距离为 15.27\n",
      "---\n",
      "计算 [ 3.88101257 -9.59334486] 与 [1.70789903 6.00435173] 距离为 15.75\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.85943906 -8.38192364] 距离为 2.32\n",
      "计算 [ 3.88101257 -9.59334486] 与 [0.69523642 3.23270535] 距离为 13.22\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.56131437 -9.61798686] 距离为 1.68\n",
      "---\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.85943906 -8.38192364] 距离为 14.97\n",
      "计算 [1.70789903 6.00435173] 与 [0.69523642 3.23270535] 距离为 2.95\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.56131437 -9.61798686] 距离为 16.09\n",
      "---\n",
      "计算 [ 5.85943906 -8.38192364] 与 [0.69523642 3.23270535] 距离为 12.71\n",
      "计算 [ 5.85943906 -8.38192364] 与 [ 5.56131437 -9.61798686] 距离为 1.27\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [ 5.56131437 -9.61798686] 距离为 13.74\n",
      "---\n",
      "\n",
      "最近距离:[  6.04774884 -10.30504657] & [ 5.56131437 -9.61798686] = 0.84, 合并后中心:[[ 5.80453161 -9.96151671]]\n",
      "\n",
      "☞ 第 3 次迭代\n",
      "\n",
      "---\n",
      "计算 [2.90159483 5.42121526] 与 [4.1575017  3.89627276] 距离为 1.98\n",
      "计算 [2.90159483 5.42121526] 与 [1.53636249 5.11121453] 距离为 1.4\n",
      "计算 [2.90159483 5.42121526] 与 [ 3.88101257 -9.59334486] 距离为 15.05\n",
      "计算 [2.90159483 5.42121526] 与 [1.70789903 6.00435173] 距离为 1.33\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.85943906 -8.38192364] 距离为 14.12\n",
      "计算 [2.90159483 5.42121526] 与 [0.69523642 3.23270535] 距离为 3.11\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.80453161 -9.96151671] 距离为 15.65\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [1.53636249 5.11121453] 距离为 2.89\n",
      "计算 [4.1575017  3.89627276] 与 [ 3.88101257 -9.59334486] 距离为 13.49\n",
      "计算 [4.1575017  3.89627276] 与 [1.70789903 6.00435173] 距离为 3.23\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.85943906 -8.38192364] 距离为 12.4\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.80453161 -9.96151671] 距离为 13.96\n",
      "---\n",
      "计算 [1.53636249 5.11121453] 与 [ 3.88101257 -9.59334486] 距离为 14.89\n",
      "计算 [1.53636249 5.11121453] 与 [1.70789903 6.00435173] 距离为 0.91\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.85943906 -8.38192364] 距离为 14.17\n",
      "计算 [1.53636249 5.11121453] 与 [0.69523642 3.23270535] 距离为 2.06\n",
      "计算 [1.53636249 5.11121453] 与 [ 5.80453161 -9.96151671] 距离为 15.67\n",
      "---\n",
      "计算 [ 3.88101257 -9.59334486] 与 [1.70789903 6.00435173] 距离为 15.75\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.85943906 -8.38192364] 距离为 2.32\n",
      "计算 [ 3.88101257 -9.59334486] 与 [0.69523642 3.23270535] 距离为 13.22\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.80453161 -9.96151671] 距离为 1.96\n",
      "---\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.85943906 -8.38192364] 距离为 14.97\n",
      "计算 [1.70789903 6.00435173] 与 [0.69523642 3.23270535] 距离为 2.95\n",
      "计算 [1.70789903 6.00435173] 与 [ 5.80453161 -9.96151671] 距离为 16.48\n",
      "---\n",
      "计算 [ 5.85943906 -8.38192364] 与 [0.69523642 3.23270535] 距离为 12.71\n",
      "计算 [ 5.85943906 -8.38192364] 与 [ 5.80453161 -9.96151671] 距离为 1.58\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [ 5.80453161 -9.96151671] 距离为 14.15\n",
      "---\n",
      "\n",
      "最近距离:[1.53636249 5.11121453] & [1.70789903 6.00435173] = 0.91, 合并后中心:[[1.62213076 5.55778313]]\n",
      "\n",
      "☞ 第 4 次迭代\n",
      "\n",
      "---\n",
      "计算 [2.90159483 5.42121526] 与 [4.1575017  3.89627276] 距离为 1.98\n",
      "计算 [2.90159483 5.42121526] 与 [ 3.88101257 -9.59334486] 距离为 15.05\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.85943906 -8.38192364] 距离为 14.12\n",
      "计算 [2.90159483 5.42121526] 与 [0.69523642 3.23270535] 距离为 3.11\n",
      "计算 [2.90159483 5.42121526] 与 [ 5.80453161 -9.96151671] 距离为 15.65\n",
      "计算 [2.90159483 5.42121526] 与 [1.62213076 5.55778313] 距离为 1.29\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [ 3.88101257 -9.59334486] 距离为 13.49\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.85943906 -8.38192364] 距离为 12.4\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.80453161 -9.96151671] 距离为 13.96\n",
      "计算 [4.1575017  3.89627276] 与 [1.62213076 5.55778313] 距离为 3.03\n",
      "---\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.85943906 -8.38192364] 距离为 2.32\n",
      "计算 [ 3.88101257 -9.59334486] 与 [0.69523642 3.23270535] 距离为 13.22\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.80453161 -9.96151671] 距离为 1.96\n",
      "计算 [ 3.88101257 -9.59334486] 与 [1.62213076 5.55778313] 距离为 15.32\n",
      "---\n",
      "计算 [ 5.85943906 -8.38192364] 与 [0.69523642 3.23270535] 距离为 12.71\n",
      "计算 [ 5.85943906 -8.38192364] 与 [ 5.80453161 -9.96151671] 距离为 1.58\n",
      "计算 [ 5.85943906 -8.38192364] 与 [1.62213076 5.55778313] 距离为 14.57\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [ 5.80453161 -9.96151671] 距离为 14.15\n",
      "计算 [0.69523642 3.23270535] 与 [1.62213076 5.55778313] 距离为 2.5\n",
      "---\n",
      "计算 [ 5.80453161 -9.96151671] 与 [1.62213076 5.55778313] 距离为 16.07\n",
      "---\n",
      "\n",
      "最近距离:[2.90159483 5.42121526] & [1.62213076 5.55778313] = 1.29, 合并后中心:[[2.26186279 5.4894992 ]]\n",
      "\n",
      "☞ 第 5 次迭代\n",
      "\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [ 3.88101257 -9.59334486] 距离为 13.49\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.85943906 -8.38192364] 距离为 12.4\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.80453161 -9.96151671] 距离为 13.96\n",
      "计算 [4.1575017  3.89627276] 与 [2.26186279 5.4894992 ] 距离为 2.48\n",
      "---\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.85943906 -8.38192364] 距离为 2.32\n",
      "计算 [ 3.88101257 -9.59334486] 与 [0.69523642 3.23270535] 距离为 13.22\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.80453161 -9.96151671] 距离为 1.96\n",
      "计算 [ 3.88101257 -9.59334486] 与 [2.26186279 5.4894992 ] 距离为 15.17\n",
      "---\n",
      "计算 [ 5.85943906 -8.38192364] 与 [0.69523642 3.23270535] 距离为 12.71\n",
      "计算 [ 5.85943906 -8.38192364] 与 [ 5.80453161 -9.96151671] 距离为 1.58\n",
      "计算 [ 5.85943906 -8.38192364] 与 [2.26186279 5.4894992 ] 距离为 14.33\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [ 5.80453161 -9.96151671] 距离为 14.15\n",
      "计算 [0.69523642 3.23270535] 与 [2.26186279 5.4894992 ] 距离为 2.75\n",
      "---\n",
      "计算 [ 5.80453161 -9.96151671] 与 [2.26186279 5.4894992 ] 距离为 15.85\n",
      "---\n",
      "\n",
      "最近距离:[ 5.85943906 -8.38192364] & [ 5.80453161 -9.96151671] = 1.58, 合并后中心:[[ 5.83198533 -9.17172018]]\n",
      "\n",
      "☞ 第 6 次迭代\n",
      "\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [ 3.88101257 -9.59334486] 距离为 13.49\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "计算 [4.1575017  3.89627276] 与 [2.26186279 5.4894992 ] 距离为 2.48\n",
      "计算 [4.1575017  3.89627276] 与 [ 5.83198533 -9.17172018] 距离为 13.17\n",
      "---\n",
      "计算 [ 3.88101257 -9.59334486] 与 [0.69523642 3.23270535] 距离为 13.22\n",
      "计算 [ 3.88101257 -9.59334486] 与 [2.26186279 5.4894992 ] 距离为 15.17\n",
      "计算 [ 3.88101257 -9.59334486] 与 [ 5.83198533 -9.17172018] 距离为 2.0\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [2.26186279 5.4894992 ] 距离为 2.75\n",
      "计算 [0.69523642 3.23270535] 与 [ 5.83198533 -9.17172018] 距离为 13.43\n",
      "---\n",
      "计算 [2.26186279 5.4894992 ] 与 [ 5.83198533 -9.17172018] 距离为 15.09\n",
      "---\n",
      "\n",
      "最近距离:[ 3.88101257 -9.59334486] & [ 5.83198533 -9.17172018] = 2.0, 合并后中心:[[ 4.85649895 -9.38253252]]\n",
      "\n",
      "☞ 第 7 次迭代\n",
      "\n",
      "---\n",
      "计算 [4.1575017  3.89627276] 与 [0.69523642 3.23270535] 距离为 3.53\n",
      "计算 [4.1575017  3.89627276] 与 [2.26186279 5.4894992 ] 距离为 2.48\n",
      "计算 [4.1575017  3.89627276] 与 [ 4.85649895 -9.38253252] 距离为 13.3\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [2.26186279 5.4894992 ] 距离为 2.75\n",
      "计算 [0.69523642 3.23270535] 与 [ 4.85649895 -9.38253252] 距离为 13.28\n",
      "---\n",
      "计算 [2.26186279 5.4894992 ] 与 [ 4.85649895 -9.38253252] 距离为 15.1\n",
      "---\n",
      "\n",
      "最近距离:[4.1575017  3.89627276] & [2.26186279 5.4894992 ] = 2.48, 合并后中心:[[3.20968225 4.69288598]]\n",
      "\n",
      "☞ 第 8 次迭代\n",
      "\n",
      "---\n",
      "计算 [0.69523642 3.23270535] 与 [ 4.85649895 -9.38253252] 距离为 13.28\n",
      "计算 [0.69523642 3.23270535] 与 [3.20968225 4.69288598] 距离为 2.91\n",
      "---\n",
      "计算 [ 4.85649895 -9.38253252] 与 [3.20968225 4.69288598] 距离为 14.17\n",
      "---\n",
      "\n",
      "最近距离:[0.69523642 3.23270535] & [3.20968225 4.69288598] = 2.91, 合并后中心:[[1.95245933 3.96279567]]\n",
      "\n",
      "☞ 第 9 次迭代\n",
      "\n",
      "---\n",
      "计算 [ 4.85649895 -9.38253252] 与 [1.95245933 3.96279567] 距离为 13.66\n",
      "---\n",
      "\n",
      "最近距离:[ 4.85649895 -9.38253252] & [1.95245933 3.96279567] = 13.66, 合并后中心:[[ 3.40447914 -2.70986843]]\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[ 3.40447914, -2.70986843]])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agglomerative_clustering(data[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f090134e",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "通过上面的计算过程，你应该能很清晰地看出 Agglomerative 聚类的完整过程了。我们将 ` data  ` 数组的每行依次按 ` 0-9  ` 编号，并将计算过程绘制成层次聚类的二叉树结构如下： \n",
    "\n",
    "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747388.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747388.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747388.png)\n",
    "\n",
     "建立好树形结构后，如果我们想取 2 个类别，就从顶部画一条横线就可以了。然后，沿着树枝延伸到叶节点就能找到各自对应的类别。如下图所示： \n",
    "\n",
    "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747598.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747598.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747598.png)\n",
    "\n",
     "如果对照 ` data  ` 预设的类别，你会发现最终的聚类结果和 ` data[1]  =  [0,  1,  1,  1,  0,  1,  0,  0,  0,  1]  ` 完全一致。 \n",
    "\n",
    "至此，我们就完整实现了自底向上层次聚类法。 \n",
    "\n",
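     ""
    ]
   },
   {
    "cell_type": "markdown",
    "id": "0d1e2f3a",
    "metadata": {},
    "source": [
     "顺带一提，SciPy 的 ` scipy.cluster.hierarchy  ` 模块也内置了凝聚层次聚类，可以用来快速复现上面「建树、画横线取类」的过程。下面是一个简要示意，其中相距较远的两簇随机样本是为演示而构造的： "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "0d1e2f3b",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "from scipy.cluster.hierarchy import linkage, fcluster\n",
     "\n",
     "# 构造相距较远的两簇演示数据\n",
     "rng = np.random.RandomState(10)\n",
     "X = np.vstack([rng.randn(5, 2), rng.randn(5, 2) + [10, 0]])\n",
     "\n",
     "Z = linkage(X, method=\"average\")  # 凝聚聚类，Z 记录每一次合并\n",
     "fcluster(Z, t=2, criterion=\"maxclust\")  # 相当于从树顶画一条横线取 2 类"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "0d1e2f3c",
    "metadata": {},
    "source": [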
    "##  26.11.  使用 scikit-learn 完成 Agglomerative 聚类  # \n",
    "\n",
    "scikit-learn 中也提供了 Agglomerative 聚类的类，相应的参数解释如下： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "665119ce",
   "metadata": {},
   "outputs": [],
   "source": [
     "sklearn.cluster.AgglomerativeClustering(n_clusters=2, metric='euclidean', memory=None, connectivity=None, compute_full_tree='auto', linkage='ward')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a3110d2",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "其中： \n",
    "\n",
    "  * ` n_clusters  ` : 表示最终要查找类别的数量，例如上面的 2 类。 \n",
    "\n",
    "  * ` metric  ` : 有 ` euclidean  ` （欧式距离）, ` l1  ` （L1 范数）, ` l2  ` （L2 范数）, ` manhattan  ` （曼哈顿距离）等可选。 \n",
    "\n",
     "  * ` linkage  ` : 连接方法： ` ward  ` （沃德连接，合并使类内方差增量最小，只支持欧氏距离）, ` complete  ` （全连接）, ` average  ` （平均连接）, ` single  ` （单连接）可选。 \n",
    "\n",
    "实验同样使用上面的数据集完成模型构建并聚类： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "e1bf33ae",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([1, 0, 0, 0, 1, 0, 1, 1, 1, 0])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.cluster import AgglomerativeClustering\n",
    "\n",
    "model = AgglomerativeClustering(n_clusters=2, metric=\"euclidean\", linkage=\"average\")\n",
    "model.fit_predict(data[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c55fc60",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "可以看到，最终的聚类结果和我们上面实现的是一致的。（表示类别的 ` 1,  0  ` 标签互换没有影响） \n",
    "\n",
    "##  26.12.  自顶向下层次聚类法  # \n",
    "\n",
    "除了上面讲到的自底向上的层次聚类法，还有一类是自顶向下层次聚类法，这种方法的计算流程与前一种正好相反，但过程要复杂很多。大致说来，我们首先将全部数据归为一类，然后逐步分割成小类，而这里的分割方法又有常见的 2 种形式： \n",
    "\n",
    "##  26.13.  利用 K-Means 算法进行分割  # \n",
    "\n",
    "首先，我们说一说利用 K-Means 算法分割方法： \n",
    "\n",
    "  1. 把数据集  $D$  归为单个类别  $C$  作为顶层。 \n",
    "\n",
    "  2. 使用 K-Means 算法把  $C$  划分成 2 个子类别，构成子层； \n",
    "\n",
    "  3. 可递归使用 K-Means 算法继续划分子层到终止条件。 \n",
    "\n",
    "同样，我们可以通过示意图来演示该聚类算法的流程。 \n",
    "\n",
    "首先，全部数据在一个类别中： \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805747889.png)\n",
    "\n",
    "然后，通过 K-Means 算法把其聚成 2 类。 \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805748081.png)\n",
    "\n",
    "紧接着，将子类再分别使用 K-Means 算法聚成 2 类。 \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805748320.png)\n",
    "\n",
     "最终，直到所有的数据都各自为 1 个类别，即分割完成。 \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805748589.png)\n",
    "\n",
     "利用 K-Means 算法进行自顶向下层次聚类，同样是看起来简单、但计算量庞大的过程。 \n",
    "\n",
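     ""
    ]
   },
   {
    "cell_type": "markdown",
    "id": "1a2b3c4d",
    "metadata": {},
    "source": [
     "上面的分裂流程可以用 scikit-learn 的 ` KMeans  ` 粗略示意：每次取出样本数最多的类，用 K-Means（k=2）将其一分为二，直到达到预定的类别数。注意这里的 ` bisecting_kmeans  ` 函数和终止条件都是为演示而假设的简化写法： "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "1a2b3c4e",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "from sklearn.cluster import KMeans\n",
     "\n",
     "def bisecting_kmeans(data, n_clusters):\n",
     "    \"\"\"自顶向下分割示意：反复用 K-Means(k=2) 分裂样本数最多的类\"\"\"\n",
     "    clusters = [data]\n",
     "    while len(clusters) < n_clusters:\n",
     "        idx = np.argmax([len(c) for c in clusters])  # 选当前最大的类分裂\n",
     "        target = clusters.pop(idx)\n",
     "        labels = KMeans(n_clusters=2, n_init=10).fit_predict(target)\n",
     "        clusters += [target[labels == 0], target[labels == 1]]\n",
     "    return clusters"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "1a2b3c4f",
    "metadata": {},
    "source": [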
    "##  26.14.  利用平均距离进行分割  # \n",
    "\n",
    "  1. 把数据集  $D$  归为单个类别  $C$  作为顶层。 \n",
    "\n",
    "  2. 从类别  $C$  中取出点  $d$  ，使得  $d$  满足到  $C$  中其他点的平均距离最远，构成类别  $N$  。 \n",
    "\n",
    "  3. 继续从类别  $C$  中取出点  $d'$  ， 使得  $d'$  满足到  $C$  中其他点的平均距离与到  $N$  中点的平均距离之间的差值最大，并将点放入  $N$  。 \n",
    "\n",
    "  4. 重复步骤 3，直到差值为负数。 \n",
    "\n",
    "  5. 再从子类中重复步骤 2，3，4 直到全部点单独成类，即完成分割。 \n",
    "\n",
    "同样，我们可以通过示意图来演示该聚类算法的流程。 \n",
    "\n",
    "首先，全部数据在一个类别中： \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805748791.png)\n",
    "\n",
    "然后，我们依次抽取 1 个数据点，并计算它与其他点的平均距离，且最终取平均距离最大的点单独成类。例如这里计算出结果为 5。 \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749016.png)\n",
    "\n",
    "同样，从剩下的 4 个点再取出一个点，使该点到剩下点的平均距离与该点到点 5 的距离差值最大且不为负数。这里没有点满足条件，终止。 \n",
    "\n",
    "接下来，从剩下的 4 个点中再取出一个点，并计算它与其他 3 点的距离，取最大单独成类。 \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749241.png)\n",
    "\n",
    "同样，从剩下的 3 个点中再取出一个点，使该点到剩下点的平均距离与该点到点 4 的距离差值最大且不为负数，合并为 1 类。点 3 明显满足： \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749424.png)\n",
    "\n",
    "重复步骤，继续计算形成子层。然后对子层中包含有对应元素的类重复上面的步骤聚类，直到全部点单独成类，即完成分割。 \n",
    "\n",
    "自顶向下层次聚类法在实施过程中常常遇到一个问题，那就是如果两个样本在上一步聚类中被划分成不同的类别，那么即使这两个点距离非常近，后面也不会被放到一类中。 \n",
    "\n",
     "所以在实际应用中，自顶向下层次聚类法没有自底向上的层次聚类法常用，这里也就不再进行实现了，了解其运行原理即可。 \n",
    "\n",
    "##  26.15.  BIRCH 聚类算法  # \n",
    "\n",
    "除了上文提到的两种层次聚类方法，还有一种非常常用且高效的层次聚类法，叫做 BIRCH。 \n",
    "\n",
     "[ BIRCH ](https://doi.org/10.1145%2F233269.233324) 的全称为 Balanced Iterative Reducing and Clustering using Hierarchies，直译过来就是「使用层次方法的平衡迭代规约和聚类」，该算法由 Tian Zhang 等人于 1996 年提出。 \n",
    "\n",
    "BIRCH 最大的特点就是高效，可用于大型数据集的快速聚类。 \n",
    "\n",
    "##  26.16.  CF 聚类特征  # \n",
    "\n",
    "BIRCH 的聚类过程主要是涉及到 CF 聚类特征和 CF Tree 聚类特征树的概念。所以，我们需要先了解什么是聚类特征。 \n",
    "\n",
    "一组样本的 CF 聚类特征定义为如下所示的三元组： \n",
    "\n",
     "$$CF = \\langle N, LS, SS \\rangle$$ \n",
    "\n",
    "其中，  $N$  表示该 CF 中拥有的样本点的数量；  $LS$  表示该 CF 中拥有的样本点各特征维度的和向量；  $SS$  表示该 CF 中拥有的样本点各特征维度的平方和。 \n",
    "\n",
    "例如，我们有 5 个样本，分别为：  $(1,3), (2,5), (1,3), (7,9), (8,8)$  ，那么： \n",
    "\n",
    "  * $N = 5$ \n",
    "\n",
    "  * $LS = (1+2+1+7+8, 3+5+3+9+8) =(19, 28)$ \n",
    "\n",
    "  * $SS = (1^2+2^2+1^2+7^2+8^2+3^2+5^2+3^2+9^2+8^2) = (307)$ \n",
    "\n",
    "于是，对应的 CF 值就为： \n",
    "\n",
    "$$ CF = \\langle 5, (19,28), (307) \\rangle $$ \n",
    "\n",
     "CF 拥有可加性，例如当  $CF'= \\langle 3, (12, 26), (87) \\rangle$  时： \n",
     "\n",
     "$$ CF + CF' = \\langle 5, (19,28), (307) \\rangle + \\langle 3, (12, 26), (87) \\rangle = \\langle 8, (31, 54), (394) \\rangle $$ \n",
    "\n",
    "CF 聚类特征本质上是定义类别（簇）的信息，并有效地对数据进行压缩。 \n",
    "\n",
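     ""
    ]
   },
   {
    "cell_type": "markdown",
    "id": "2b3c4d5a",
    "metadata": {},
    "source": [
     "CF 三元组及其中的统计量可以用几行 NumPy 验证（沿用上面的 5 个样本点）： "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "2b3c4d5b",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "points = np.array([(1, 3), (2, 5), (1, 3), (7, 9), (8, 8)])\n",
     "\n",
     "N = len(points)  # 样本数量\n",
     "LS = points.sum(axis=0)  # 各特征维度的线性和\n",
     "SS = (points ** 2).sum()  # 各特征维度的平方和（按标量累加）\n",
     "\n",
     "print(N, LS, SS)  # 5 [19 28] 307"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "2b3c4d5c",
    "metadata": {},
    "source": [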
    "##  26.17.  CF 聚类特征树  # \n",
    "\n",
    "接下来，我们介绍第二个概念 CF 聚类特征树。 \n",
    "\n",
     "CF 树由根节点（root node）、枝节点（branch node）和叶节点（leaf node）构成，并包含三个参数：枝平衡因子  $\\beta$  、叶平衡因子  $\\lambda$  和空间阈值  $\\tau$  。其中，非叶节点（nonleaf node）中包含不多于  $\\beta$  个  $[CF,child_{i}]$  条目。 \n",
    "\n",
    "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749631.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749631.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749631.png)\n",
    "\n",
    "BIRCH 算法的核心就是基于训练样本建立了 CF 聚类特征树。CF 聚类特征树对应的输出就是若干个 CF 节点，每个节点里的样本点就是一个聚类的类别。 \n",
    "\n",
    "其实，关于 CF 聚类特征树的特点以及树的生成过程还有很多内容可以深入学习，不过这里面涉及到大量的数学理论和推导过程，不太好理解，这里就不再展开了。有兴趣的同学可以阅读 [ 原论文 ](http://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf) 。 \n",
    "\n",
    "最后，我们简单说一下 BIRCH 算法相比 Agglomerative 算法的优势，也就是总结学习 BIRCH 算法的必要性： \n",
    "\n",
    "  1. BIRCH 算法在建立 CF 特征树时只存储原始数据的特征信息，并不需要存储原始数据信息，内存开销上更优，计算高效。 \n",
    "\n",
    "  2. BIRCH 算法只需要遍历一遍原始数据，而 Agglomerative 算法在每次迭代都需要遍历一遍数据，再次突出 BIRCH 的高效性。 \n",
    "\n",
    "  3. BIRCH 属于在线学习算法，并支持对流数据的聚类，开始聚类时并不需要知道所有的数据。 \n",
    "\n",
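     ""
    ]
   },
   {
    "cell_type": "markdown",
    "id": "3c4d5e6a",
    "metadata": {},
    "source": [
     "第 3 点可以用 scikit-learn 中 Birch 的 ` partial_fit  ` 方法简单示意：数据分批到达、逐批更新 CF 树，无需一开始拿到全部样本。这里的随机批数据是为演示而假设的： "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "3c4d5e6b",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "from sklearn.cluster import Birch\n",
     "\n",
     "rng = np.random.RandomState(0)\n",
     "birch = Birch(n_clusters=None, threshold=1.0)\n",
     "\n",
     "# 模拟流式数据：分 5 批到达，逐批更新 CF 树\n",
     "for _ in range(5):\n",
     "    birch.partial_fit(rng.randn(20, 2))\n",
     "\n",
     "birch.predict(rng.randn(10, 2))  # 对新样本给出所属的 CF 子簇"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "3c4d5e6c",
    "metadata": {},
    "source": [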
    "##  26.18.  BIRCH 聚类实现  # \n",
    "\n",
    "上面说了这么多，总结就是 BIRCH 属于层次聚类算法中非常高效的那一种方法。下面，就来看一看如何调用 scikit-learn 提供的 BIRCH 类完成聚类任务。 \n",
    "\n",
     "本次实验中，我们将使用手写字符数据集 DIGITS。该数据集源自 UCI 开放数据集网站的 [ Optical Recognition of Handwritten Digits Data Set ](http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits) 。数据集包含由 1797 张数字 0 到 9 的手写字符影像转换后的数字矩阵，目标值是 0-9。由于本次实验是完成聚类任务，实际上我们是想通过其特征来观察相同含义字符是否有空间上的联系性。 \n",
    "\n",
    "该数据集可以直接使用 scikit-learn 提供的 ` load_digits  ` 方法进行调用，其中包含 3 个属性： \n",
    "\n",
    "属性  |  描述   \n",
    "---|---  \n",
    "` images  ` |  8x8 矩阵，记录每张手写字符图像对应的像素灰度值   \n",
    "` data  ` |  将 images 对应的 8x8 矩阵转换为行向量   \n",
    "` target  ` |  记录 1797 张影像各自代表的数字   \n",
    "  \n",
    "接下来，我们可视化前 5 个字符图像： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "4d396463",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAA8gAAADMCAYAAACvK4qJAAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjEsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvc2/+5QAAAAlwSFlzAAAPYQAAD2EBqD+naQAAFENJREFUeJzt3V+MlOXZB+BnYcuiBnZX8Q8EwrJJG4soW8ASY9tIutbYmkAPwLQ0AdKAMdgW9WDdE13TpMiJxaQGqImuB1WwB2jTtDRqstg2EuuSbdo0sf5Z4lqLaAu7QFow7H6Zmcj3UYsy+z0z88w815W8gZns3POw7+/mnXtm3pmmiYmJiQAAAACZm1LrBQAAAEAKDMgAAABgQAYAAIASAzIAAAAYkAEAAKDEgAwAAAAGZAAAACgxIAMAAEAIobnadzg+Ph7efffdMGPGjNDU1FTtuydzExMT4fjx42HOnDlhypTqPz8k/9SS/JM7PUDO5J/cTVxgD1R9QC40xrx586p9t3COkZGRMHfu3Krfr/yTAvknd3qAnMk/uRv5lB6o+oBceNboo4XNnDkzpGjv3r1R6z3wwANR661YsSLE1tfXF7Vee3t7SNHY2FjxP+ePclht9ZD/2L7+9a9HrTc6Ohpi6+3tjVrvtttuCymS/+r77W9/G7Xet7/97RDbtddeG7Xer371q5AqPfDpfvzjHyf9+GL+/Pkhtv3790et5zFQ/eY/tmPHjkWtd+edd4bYnn766ZCLsQvsgaoPyB+9paLQGKk2x8UXXxy1Xuy3sUybNi3EFntfpLpvP1Krt/bUQ/5ja26O+9/M1KlTQ+o9n/q+lf/queSSS5Lfd7F7tB72rR44v+nTp4eUVeKtwR4DVfd+U85/Jd5WHtNnPvOZEFsu+6KcHvAhXQAAAGBABgAAgBIDMgAAAEx2QH700UdDR0dH8TyV5cuXh1deeSX+yiBR8k/u9AA5k39ypwdodGUPyHv27An33HNP8ZOZDx48GBYvXhxuueWWcOTIkcqsEBIi/+ROD5Az+Sd3eoAclD0gP/zww2Hjxo1hw4YNYeHChWHnzp3FT4B9/PHHK7NCSIj8kzs9QM7kn9zpAXJQ1oB8+vTpMDg4GLq7u/+3wJQpxcsvv/zyf73NqVOnit859X83qEfyT+7K7QH5p5E4BpA7xwByUdaA/MEHH4QzZ86EK6+88pzrC5cPHz78X2+zdevW0NraenYrfDkz1CP5J3fl9oD800gcA8idYwC5qPinWPf29obR0dGz28jISKXvEpIh/+RM/smdHiBn8k+9ai7nh2fNmhWmTp0a3nvvvXOuL1y+6qqr/uttWlpaihvUO/knd+X2gPzTSBwDyJ1jALko6xXkadOmhaVLl4YXX3zx7HXj4+PFyzfccEMl1gfJkH9ypwfImfyTOz1ALsp6Bbmg8NHu69atC8uWLQtf/OIXw/bt28PJkyeLn2YHjU7+yZ0eIGfyT+70ADkoe0C+/fbbw/vvvx/uv//+4gn5XV1dYd++fR87YR8akfyTOz1AzuSf3OkBclD2gFxw1113FTfIkfyTOz1AzuSf3OkBGl3FP8UaAAAA6oEBGQAAACb7FutG19PTE7Xe8PBw1HpHjx4NsV166aVR6z3zzDNR661evTpqPaqnra0tar39+/eH2AYGBqLWW7VqVdR6VM/Q0FDUeitWrIhar7W1NcR26NCh6DWpnvvuuy/p4/euXbui1rvjjjtCbIODg1HrdXd3R61H/erv749ar3DON5XnFWQAAAAwIAMAAECJARkAAAAMyAAAAFBiQAYAAAADMgAAAJQYkAEAAMCADAAAACUGZAAAADAgAwAAQIkBGQAAAAzIAAAAUGJABgAAAAMyAAAAlBiQAQAAwIAMAAAAJQZkAAAAMCADAABAiQEZAAAAQgjNoQEMDg5GrTc8PBy13ptvvhm1XmdnZ4jt5ptvTnqfrF69Omo9zm9oaChqvYGB
gZC6rq6uWi+BRDz77LNR6y1evDhqvVWrVoXYHnzwweg1qZ5NmzZFrdfT0xO13tKlS6PWW7BgQYitu7s7ek3q07Fjx6LW6+/vj1pvy5YtIbZDhw6FlHV0dFT9Pr2CDAAAAAZkAAAAKDEgAwAAgAEZAAAASgzIAAAAYEAGAACASQzIW7duDddff32YMWNGuOKKK4pfN/Haa6+VUwLqmh4gZ/JPzuSf3OkBclHWgLx///6wefPmcODAgfD888+HDz/8MHzta18LJ0+erNwKISF6gJzJPzmTf3KnB8hFczk/vG/fvo99+XXhGaTBwcHwla98JfbaIDl6gJzJPzmTf3KnB8hFWQPyfxodHS3+eemll573Z06dOlXcPjI2Nvb/uUtIyqf1gPzTyOSfnHkMRO4cA2hUk/6QrvHx8bBly5Zw4403hkWLFn3i+Qqtra1nt3nz5k32LiEpF9ID8k+jkn9y5jEQuXMMoJFNekAunIPw5z//OezevfsTf663t7f4DNNH28jIyGTvEpJyIT0g/zQq+SdnHgORO8cAGtmk3mJ91113hV/+8pfhpZdeCnPnzv3En21paSlu0EgutAfkn0Yk/+TMYyBy5xhAoytrQJ6YmAjf+973wt69e8PAwEBYsGBB5VYGCdID5Ez+yZn8kzs9QC6ay307xVNPPRWee+654negHT58uHh94byCiy66qFJrhGToAXIm/+RM/smdHiAXZZ2DvGPHjuI5BDfddFOYPXv22W3Pnj2VWyEkRA+QM/knZ/JP7vQAuSj7LdaQMz1AzuSfnMk/udMD5GLSn2INAAAAjcSADAAAAJP9mqfUHD16NGq9JUuWRK3X2dkZUrd06dJaL4FJ2r59e9R6fX19UesVzldKXeF8KijYsmVL1HodHR1Jr69g5cqV0WtSPbEfY7z11ltR6w0PD0et193dHVJ/HNne3h61HtXT398ftd6hQ4ei1lu/fn2ILfZxpa2tLenHpRfCK8gAAABgQAYAAIASAzIAAAAYkAEAAKDEgAwAAAAGZAAAACgxIAMAAIABGQAAAEoMyAAAAGBABgAAgBIDMgAAABiQAQAAoMSADAAAAAZkAAAAKDEgAwAAgAEZAAAASgzIAAAAYEAGAACAEgMyAAAAhBCaQwM4evRo1Ho333xzyE3s32F7e3vUepzfli1botZbv359dlk4duxYrZdAIvtu+/btUes9++yzIXX9/f21XgIJ6ezsjFrvn//8Z9R63d3dUetVouYLL7yQ3XG0VmL/H3v33XdHrbdu3bqQukceeSRqvSeeeCLUO68gAwAAgAEZAAAASgzIAAAAYEAGAACAEq8gAwAAgAEZAAAAIgzIDz30UGhqaor+NTNQD+Sf3OkBcib/5Ez+aWSTHpD/8Ic/hF27doXrrrsu7oqgDsg/udMD5Ez+yZn80+gmNSCfOHEirF27Njz22GO+vJzsyD+50wPkTP7JmfyTg0kNyJs3bw7f+MY3Qnd396f+7KlTp8LY2Ng5G9Qz+Sd3F9oD8k8jcgwgZ/JPDprLvcHu3bvDwYMHi2+vuBBbt24NDz744GTWBsmRf3JXTg/IP43GMYCcyT+5KOsV5JGRkfCDH/wg/OxnPwvTp0+/oNv09vaG0dHRs1uhBtQj+Sd35faA/NNIHAPImfyTk7JeQR4cHAxHjhwJS5YsOXvdmTNnwksvvRR+8pOfFN9KMXXq1HNu09LSUtyg3sk/uSu3B+SfRuIYQM7kn5yUNSB/9atfDX/605/OuW7Dhg3h6quvDj09PR9rDGgk8k/u9AA5k39yJv/kpKwBecaMGWHRokXnXHfJJZeEyy677GPXQ6ORf3KnB8iZ/JMz+Scnk/4eZAAAAMj6U6z/08DAQJyVQB2Sf3KnB8iZ/JMz+adReQUZAAAADMgAAABQYkAGAACAGOcgp6C9vT36d72l7OjRo9Frvvrqq1HrrVmzJmo9qKShoaGo9bq6uqLW4/z6+vqi1nvkkUdCyvbu3Ru9Zltb
W/SaUKnHaC+88EKI7Y477ohab9u2bVHrPfTQQ1HrNZLY/3+1trZGrffkk08m/XilElatWhXqnVeQAQAAwIAMAAAAJQZkAAAAMCADAABAiQEZAAAADMgAAABQYkAGAAAAAzIAAACUGJABAADAgAwAAAAlBmQAAAAwIAMAAECJARkAAAAMyAAAAFBiQAYAAAADMgAAAJQYkAEAAMCADAAAACUGZAAAAAghNIcG0NnZGbXeq6++GrXez3/+86TrVUJPT0+tlwBkYP369VHrDQwMRK33xz/+MWq9b37zmyG2lStXJr1PVq1aFbUen+y+++6LWq+7uztqvaNHj4bYnn/++aj11qxZE7Ue53fTTTdFrXfs2LGo9YaGhpL+9xasW7cuar22trZQ77yCDAAAAAZkAAAAKDEgAwAAgAEZAAAASgzIAAAAMJkB+W9/+1v4zne+Ey677LJw0UUXhWuvvTb6pz5DquSf3OkBcib/5E4PkIPmcj9a/8YbbwwrVqwIv/71r8Pll18eXn/99dDe3l65FUIi5J/c6QFyJv/kTg+Qi7IG5G3btoV58+aFJ5544ux1CxYsqMS6IDnyT+70ADmTf3KnB8hFWW+x/sUvfhGWLVsWVq9eHa644orwhS98ITz22GOfeJtTp06FsbGxczaoR/JP7srtAfmnkTgGkDvHAHJR1oD81ltvhR07doTPfvaz4Te/+U248847w/e///3w5JNPnvc2W7duDa2trWe3wjNPUI/kn9yV2wPyTyNxDCB3jgHkoqwBeXx8PCxZsiT86Ec/Kj5rtGnTprBx48awc+fO896mt7c3jI6Ont1GRkZirBuqTv7JXbk9IP80EscAcucYQC7KGpBnz54dFi5ceM51n//858Pbb7993tu0tLSEmTNnnrNBPZJ/clduD8g/jcQxgNw5BpCLsgbkwifXvfbaa+dc99e//jXMnz8/9rogOfJP7vQAOZN/cqcHyEVZA/Ldd98dDhw4UHxrxRtvvBGeeuqp8NOf/jRs3ry5ciuERMg/udMD5Ez+yZ0eIBdlDcjXX3992Lt3b3j66afDokWLwg9/+MOwffv2sHbt2sqtEBIh/+ROD5Az+Sd3eoBclPU9yAW33XZbcYMcyT+50wPkTP7JnR4gB2W9ggwAAACNyoAMAAAABmQAAACY5DnIKers7Ixab9u2bVHr9fT0RK23bNmyENvg4GD0mtSntra2qPVWrlwZtd5zzz0XYhsYGIhab/369VHrcX5dXV1R6w0NDSVdr6+vL8QWu6c6Ojqi1lu1alXUenyy9vb2qPU2bdoUUrdmzZqo9Xbt2hW1HvUr9mOq0dHREJvHLB/nFWQAAAAwIAMAAECJARkAAAAMyAAAAFBiQAYAAAADMgAAAJQYkAEAAMCADAAAACUGZAAAADAgAwAAQIkBGQAAAAzIAAAAUGJABgAAAAMyAAAAlBiQAQAAwIAMAAAAJQZkAAAAMCADAABASXOosomJieKfY2NjIVX/+te/otYbHx+PWu/DDz8MsaW8Pyrx7/woh9VWD/mPrRJ5je306dNR66W6f+W/+k6cOJFdP506dSpqvZh50QOf7t///nfSj4EqwTGgOuoh/7EdP348pO7kyZNR640lvH8vtAeaJqrcJe+8806YN29eNe8SPmZkZCTMnTu36vcr/6RA/smdHiBn8k/uRj6lB6o+IBeeSXz33XfDjBkzQlNT0ydO+IUmKvwDZs6cWc0l0sD7oxD3wrN5c+bMCVOmVP8MA/mvT42yP+SfyWqUfaIHmIxG2R/yz2SMNdD+uNAeqPpbrAuLKedZq8KOqPed0UgaYX+0trbW7L7lv741wv6Qf/4/GmGf6AEmqxH2h/wzWTMbZH9cSA/4kC4AAAAwIAMAAEDiA3JLS0t44IEHin9Se/ZHdfl9p8X+qC6/7/TYJ9Xl950W+6O6/L7T0pLh/qj6h3QBAABAipJ9BRkAAACqyYAMAAAABmQAAAAoMSADAABAqgPyo48+
Gjo6OsL06dPD8uXLwyuvvFLrJWWrr68vNDU1nbNdffXVtV5Ww9MDaZD/2pD/NMh/bch/OvRAbeiBNPRlnP/kBuQ9e/aEe+65p/hx4gcPHgyLFy8Ot9xySzhy5Eitl5ata665Jvz9738/u/3ud7+r9ZIamh5Ii/xXl/ynRf6rS/7ToweqSw+k5ZpM85/cgPzwww+HjRs3hg0bNoSFCxeGnTt3hosvvjg8/vjjtV5atpqbm8NVV111dps1a1atl9TQ9EBa5L+65D8t8l9d8p8ePVBdeiAtzZnmP6kB+fTp02FwcDB0d3efvW7KlCnFyy+//HJN15az119/PcyZMyd0dnaGtWvXhrfffrvWS2pYeiA98l898p8e+a8e+U+THqgePZCe1zPNf1ID8gcffBDOnDkTrrzyynOuL1w+fPhwzdaVs8K5H/39/WHfvn1hx44dYXh4OHz5y18Ox48fr/XSGpIeSIv8V5f8p0X+q0v+06MHqksPpGV5xvlvrvUCSNutt9569u/XXXddsVnmz58fnnnmmfDd7363pmuDSpN/cib/5E4PkLNbM85/Uq8gF97XPnXq1PDee++dc33hcuF979ReW1tb+NznPhfeeOONWi+lIemBtMl/Zcl/2uS/suQ/fXqgsvRA2toyyn9SA/K0adPC0qVLw4svvnj2uvHx8eLlG264oaZro+TEiRPhzTffDLNnz671UhqSHkib/FeW/KdN/itL/tOnBypLD6TtRE75n0jM7t27J1paWib6+/sn/vKXv0xs2rRpoq2tbeLw4cO1XlqW7r333omBgYGJ4eHhid///vcT3d3dE7NmzZo4cuRIrZfWsPRAOuS/+uQ/HfJfffKfFj1QfXogHfdmnP/kzkG+/fbbw/vvvx/uv//+4gn5XV1dxZPD//OEfarjnXfeCd/61rfCP/7xj3D55ZeHL33pS+HAgQPFv1MZeiAd8l998p8O+a8++U+LHqg+PZCOdzLOf1NhSq71IgAAAKDWkjoHGQAAAGrFgAwAAAAGZAAAACgxIAMAAIABGQAAAEoMyAAAAGBABgAAgBIDMgAAABiQAQAAoMSADAAAAAZkAAAAKDEgAwAAEAjhfwDoauCUJedYYAAAAABJRU5ErkJggg==",
      "text/plain": [
       "<Figure size 1200x400 with 5 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "digits = datasets.load_digits()\n",
    "\n",
    "# 查看前 5 个字符\n",
    "fig, axes = plt.subplots(1, 5, figsize=(12, 4))\n",
    "for i, image in enumerate(digits.images[:5]):\n",
    "    axes[i].imshow(image, cmap=plt.cm.gray_r)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4ba356f",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "\n",
     "我们都知道，一个手写字符的数据由一个 8x8 的矩阵表示。 \n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "ce0f5c32",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0.,  0.,  5., 13.,  9.,  1.,  0.,  0.],\n",
       "       [ 0.,  0., 13., 15., 10., 15.,  5.,  0.],\n",
       "       [ 0.,  3., 15.,  2.,  0., 11.,  8.,  0.],\n",
       "       [ 0.,  4., 12.,  0.,  0.,  8.,  8.,  0.],\n",
       "       [ 0.,  5.,  8.,  0.,  0.,  9.,  8.,  0.],\n",
       "       [ 0.,  4., 11.,  0.,  1., 12.,  7.,  0.],\n",
       "       [ 0.,  2., 14.,  5., 10., 12.,  0.,  0.],\n",
       "       [ 0.,  0.,  6., 13., 10.,  0.,  0.,  0.]])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "digits.images[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d90d6301",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "如果我们针对该矩阵进行扁平化处理，就能变为 1x64 的向量。对于这样一个高维向量，虽然可以在聚类时直接计算距离，但却无法很好地在二维平面中表示相应的数据点。因为，二维平面中的点只由横坐标和纵坐标组成。 \n",
    "\n",
    "所以，为了尽可能还原聚类的过程，我们需要将 1x64 的行向量（64 维），处理成 1x2 的行向量（2 维），也就是降维的过程。 \n",
    "\n",
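     ""
    ]
   },
   {
    "cell_type": "markdown",
    "id": "4d5e6f7a",
    "metadata": {},
    "source": [
     "扁平化本身就是一次 ` reshape  ` 操作，可以验证 ` digits.data  ` 正是 ` digits.images  ` 逐张展平后的结果： "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "4d5e6f7b",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "from sklearn import datasets\n",
     "\n",
     "digits = datasets.load_digits()\n",
     "\n",
     "flat = digits.images[0].reshape(-1)  # 8x8 矩阵展平成 64 维行向量\n",
     "np.array_equal(flat, digits.data[0])  # True：data 即扁平化后的 images"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "4d5e6f7c",
    "metadata": {},
    "source": [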
    "既然是降低维度，那么应该怎样做呢？是直接取前面两位数，或者随机取出两位？当然不是。这里学习一种新方法，叫 PCA 主成分分析。 \n",
    "\n",
    "##  26.19.  PCA 主成分分析  # \n",
    "\n",
    "主成分分析是多元线性统计里面的概念，它的英文是：Principal Components Analysis，简称 PCA。主成分分析旨在降低数据的维数，通过保留数据集中的主要成分来简化数据集。 \n",
    "\n",
    "主成分分析的数学原理非常简单，通过对协方差矩阵进行特征分解，从而得出主成分（特征向量）与对应的权值（特征值）。然后剔除那些较小特征值（较小权值）对应的特征，从而达到降低数据维数的目的。 \n",
    "\n",
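     ""
    ]
   },
   {
    "cell_type": "markdown",
    "id": "5e6f7a8b",
    "metadata": {},
    "source": [
     "上述「协方差矩阵特征分解」的原理可以用 NumPy 直接写出来。这是一个最小示意，示例数据为随机生成： "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "5e6f7a8c",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "rng = np.random.RandomState(0)\n",
     "X = rng.randn(100, 5)  # 100 个 5 维样本\n",
     "\n",
     "Xc = X - X.mean(axis=0)  # 1. 中心化\n",
     "cov = np.cov(Xc, rowvar=False)  # 2. 协方差矩阵\n",
     "vals, vecs = np.linalg.eigh(cov)  # 3. 特征分解（eigh 返回升序特征值）\n",
     "top2 = vecs[:, np.argsort(vals)[::-1][:2]]  # 4. 取最大两个特征值对应的特征向量\n",
     "Xc @ top2  # 5. 投影到 2 维，即保留前两个主成分"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "5e6f7a8d",
    "metadata": {},
    "source": [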
    "主成分分析通常有两个作用： \n",
    "\n",
    "  1. 参考本文的目的，方便将数据用于低维空间可视化。聚类过程中的可视化是很有必要的。 \n",
    "\n",
    "  2. 高维度数据集往往就意味着计算资源的大量消耗。通过对数据进行降维，我们就能在不较大影响结果的同时，减少模型学习时间。 \n",
    "\n",
    "我们将在后面的实验中详细讨论主成分分析的原理和应用。所以，我们这里先直接使用 scikit-learn 中 ` PCA  ` 方法完成： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "40c394ca",
   "metadata": {},
   "outputs": [],
   "source": [
    "sklearn.decomposition.PCA(n_components=None, copy=True, whiten=False, svd_solver='auto')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a7370dc",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "其中： \n",
    "\n",
    "  * ` n_components=  ` 表示需要保留主成分（特征）的数量。 \n",
    "\n",
    "  * ` copy=  ` 表示针对原始数据降维还是针对原始数据副本降维。当参数为 False 时，降维后的原始数据会发生改变，这里默认为 True。 \n",
    "\n",
     "  * ` whiten=  ` : 表示是否对数据进行白化，即降低特征之间的相关性，并使每个特征具有相同的方差。 \n",
    "\n",
     "  * ` svd_solver=  ` : 表示奇异值分解 SVD 的求解方法。有 4 个可选值，分别是： ` auto  ` , ` full  ` , ` arpack  ` , ` randomized  ` 。 \n",
    "\n",
    "在使用 PCA 降维时，我们也会使用到 ` PCA.fit()  ` 方法。 ` .fit()  ` 是 scikit-learn 训练模型的通用方法，但是该方法本身返回的是模型的参数。所以，通常我们会使用 ` PCA.fit_transform()  ` 方法直接返回降维后的数据结果。 \n",
    "\n",
    "下面，我们就针对 DIGITS 数据集进行特征降维。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "a0dc93e1",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ -1.25946645, -21.27488348],\n",
       "       [  7.9576113 ,  20.76869896],\n",
       "       [  6.99192297,   9.95598641],\n",
       "       ...,\n",
       "       [ 10.8012837 ,   6.96025223],\n",
       "       [ -4.87210009, -12.42395362],\n",
       "       [ -0.34438963,  -6.36554919]], shape=(1797, 2))"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.decomposition import PCA\n",
    "\n",
    "# PCA 将数据降为 2 维\n",
    "pca = PCA(n_components=2)\n",
    "pca_data = pca.fit_transform(digits.data)\n",
    "pca_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "034a5d30",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "可以看到，每一行的特征已经由先前的 64 个缩减为 2 个了。接下来将降维后的数据绘制到二维平面中。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2591e4c4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 8))\n",
    "plt.scatter(pca_data[:, 0], pca_data[:, 1])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96487fb7",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "[ ![../_images/08a07e87e8cb13ecde9519f52d8a1ee1fac2d4aae814f95142d1c3579aadd258.png](../_images/08a07e87e8cb13ecde9519f52d8a1ee1fac2d4aae814f95142d1c3579aadd258.png) ](../_images/08a07e87e8cb13ecde9519f52d8a1ee1fac2d4aae814f95142d1c3579aadd258.png)\n",
    "\n",
    "上图就是 DIGITS 数据集中 1797 个样本通过 PCA 降维后对应在二维平面的数据点。 \n",
    "\n",
    "现在，我们可以直接使用 BIRCH 对降维后的数据进行聚类。由于我们提前知道这是手写数字字符，所以选择聚为 ` 10  ` 类。当然，在聚类时，我们只是知道大致要聚集的类别数量，而并不知道数据对应的标签值。 \n",
    "\n",
    "BIRCH 在 scikit-learn 对应的主要类及参数如下： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "40df16f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "sklearn.cluster.Birch(threshold=0.5, branching_factor=50, n_clusters=3, compute_labels=True, copy=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c68d355",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "其中： \n",
    "\n",
    "  * ` threshold  ` : 每个 CF 的空间阈值  $\\tau$  。参数值越小，则 CF 特征树的规模会越大，学习时花费的时间和内存会越多。默认值是 0.5，但如果样本的方差较大，则一般需要增大这个默认值。 \n",
    "\n",
    "  * ` branching_factor  ` : CF 树中所有节点的最大 CF 数。该参数默认为 50，如果样本量非常大，一般需要增大这个默认值。 \n",
    "\n",
     "  * ` n_clusters  ` : 虽然层次聚类无需预先设定类别数量，但可以设定期望查询的类别数；也可以设为 ` None  ` ，此时直接以 CF 子簇作为最终聚类结果。 \n",
    "\n",
    "接下来，使用 BIRCH 算法得到 PCA 降维后数据的聚类结果： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a034dd48",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "from sklearn.cluster import Birch\n",
    "\n",
    "birch = Birch(n_clusters=10)\n",
    "cluster_pca = birch.fit_predict(pca_data)\n",
    "cluster_pca"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ede1a77",
   "metadata": {},
   "outputs": [],
   "source": [
    "array([3, 0, 0, ..., 0, 5, 9])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44fe4086",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "利用得到的聚类结果对散点图进行着色。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "755b95be",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 8))\n",
    "plt.scatter(pca_data[:, 0], pca_data[:, 1], c=cluster_pca)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b308a7db",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "[ ![../_images/eabc8008c067988dbfd3a2aac04c03e66c657d037d04a7efe0b80b4087140db1.png](../_images/eabc8008c067988dbfd3a2aac04c03e66c657d037d04a7efe0b80b4087140db1.png) ](../_images/eabc8008c067988dbfd3a2aac04c03e66c657d037d04a7efe0b80b4087140db1.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3a3737d4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "# 计算聚类过程中的决策边界\n",
    "x_min, x_max = pca_data[:, 0].min() - 1, pca_data[:, 0].max() + 1\n",
    "y_min, y_max = pca_data[:, 1].min() - 1, pca_data[:, 1].max() + 1\n",
    "\n",
    "xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.4), np.arange(y_min, y_max, 0.4))\n",
    "temp_cluster = birch.predict(np.c_[xx.ravel(), yy.ravel()])\n",
    "\n",
    "# 将决策边界绘制出来\n",
    "temp_cluster = temp_cluster.reshape(xx.shape)\n",
    "plt.figure(figsize=(10, 8))\n",
    "plt.contourf(xx, yy, temp_cluster, cmap=plt.cm.bwr, alpha=0.3)\n",
    "plt.scatter(pca_data[:, 0], pca_data[:, 1], c=cluster_pca, s=15)\n",
    "\n",
    "# 图像参数设置\n",
    "plt.xlim(x_min, x_max)\n",
    "plt.ylim(y_min, y_max)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0656bf2",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "[ ![../_images/4e520923e83dcc318cbc7fe06ca489f9026cd35607364ec3026bf62d455a13fe.png](../_images/4e520923e83dcc318cbc7fe06ca489f9026cd35607364ec3026bf62d455a13fe.png) ](../_images/4e520923e83dcc318cbc7fe06ca489f9026cd35607364ec3026bf62d455a13fe.png)\n",
    "\n",
    "其实，我们可以利用预先知道的各字符对应的标签对散点图进行着色，对比上面的聚类结果。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "53eb53a4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 8))\n",
    "plt.scatter(pca_data[:, 0], pca_data[:, 1], c=digits.target)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae6d13cc",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "[ ![../_images/667a37919b3fd30291d87a9d8f9158e25c4974a1f243247fb96530b0f8840b99.png](../_images/667a37919b3fd30291d87a9d8f9158e25c4974a1f243247fb96530b0f8840b99.png) ](../_images/667a37919b3fd30291d87a9d8f9158e25c4974a1f243247fb96530b0f8840b99.png)\n",
    "\n",
    "Comparing the two figures, you can see that the clustering of the PCA-reduced data roughly follows the distribution of the original data. It does not matter that the colors of the blocks differ, since the true labels and the cluster labels are not in the same order; just focus on how the blocks of data are distributed. \n",
    "\n",
    "However, the scatter plot drawn with the true labels looks noticeably messier, which is an effect of the PCA dimensionality reduction. In general, the data fed into a clustering model does not have to be dimension-reduced. Let's feed in the original data and cluster again. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4dc5f5ef",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "cluster_ori = birch.fit_predict(digits.data)\n",
    "cluster_ori"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1ca6565b",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "array([7, 9, 4, ..., 4, 1, 4])"
   ]
  },
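  {
   "cell_type": "markdown",
   "id": "c3d8a1f2",
   "metadata": {},
   "source": [
    "Because cluster labels are arbitrary (cluster 4 above need not correspond to the digit 4), comparing them with the true labels calls for a permutation-invariant score. As an illustrative aside (not part of the original experiment), scikit-learn's adjusted Rand index can quantify the agreement:\n",
    "\n",
    "```python\n",
    "from sklearn.cluster import Birch\n",
    "from sklearn.datasets import load_digits\n",
    "from sklearn.metrics import adjusted_rand_score\n",
    "\n",
    "digits = load_digits()\n",
    "labels = Birch(n_clusters=10).fit_predict(digits.data)\n",
    "\n",
    "# ARI is 1 for a perfect match and near 0 for random labelings,\n",
    "# no matter how the cluster numbers are permuted\n",
    "ari = adjusted_rand_score(digits.target, labels)\n",
    "print(round(ari, 3))\n",
    "```\n"
   ]
  },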
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f094b753",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 8))\n",
    "plt.scatter(pca_data[:, 0], pca_data[:, 1], c=cluster_ori)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "143d69f8",
   "metadata": {},
   "source": [
    "[ ![../_images/7dc96827337058590d2c7a04481cf2388cb28f0631ceefb835697df58d74676d.png](../_images/7dc96827337058590d2c7a04481cf2388cb28f0631ceefb835697df58d74676d.png) ](../_images/7dc96827337058590d2c7a04481cf2388cb28f0631ceefb835697df58d74676d.png)\n",
    "\n",
    "Now you can see that the clustering result matches the distribution of the original dataset more closely. Once again, the overlapping colors here are an artifact of visualizing PCA-reduced data on a 2-D plane and do not reflect the actual clustering quality. \n",
    "\n",
    "Finally, a word on when to use PCA. In general, we do not apply PCA as soon as we get the data; it is only worth considering when the algorithm underperforms, training takes too long, or visualization is needed. The main reason is that PCA is a lossy compression of the data and discards some of the original features. \n",
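    "\n",
    "To make that loss concrete, here is a short check (an illustrative sketch, not part of the original experiment) of how much variance 2 principal components retain on the digits data:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import load_digits\n",
    "from sklearn.decomposition import PCA\n",
    "\n",
    "digits = load_digits()\n",
    "pca = PCA(n_components=2).fit(digits.data)\n",
    "\n",
    "# Fraction of the total variance kept by the 2 components\n",
    "retained = pca.explained_variance_ratio_.sum()\n",
    "print(f\"{retained:.2%}\")\n",
    "```\n",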
    "\n",
    "To wrap up, the table below compares the advantages and disadvantages of the 3 hierarchical clustering methods covered in this experiment: \n",
    "\n",
    "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749852.jpg](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749852.jpg) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6102timestamp1531805749852.jpg)\n",
    "\n",
    "##  26.20.  Summary  # \n",
    "\n",
    "This experiment covered hierarchical clustering methods, in particular the bottom-up (agglomerative), top-down (divisive), and BIRCH algorithms. The agglomerative and BIRCH methods are the more commonly used of the three, and BIRCH is computationally efficient. BIRCH also has drawbacks, however: it often performs poorly on high-dimensional data, and Mini Batch K-Means is sometimes used as a replacement. \n",
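    "\n",
    "For reference, Mini Batch K-Means is available in scikit-learn with an interface similar to Birch (a sketch with illustrative parameter choices, not a tuned configuration):\n",
    "\n",
    "```python\n",
    "from sklearn.cluster import MiniBatchKMeans\n",
    "from sklearn.datasets import load_digits\n",
    "\n",
    "digits = load_digits()\n",
    "\n",
    "# Fits on random mini-batches, which keeps memory use low on large datasets\n",
    "mbk = MiniBatchKMeans(n_clusters=10, batch_size=256, n_init=10, random_state=0)\n",
    "labels = mbk.fit_predict(digits.data)\n",
    "print(labels.shape)  # (1797,)\n",
    "```\n",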
    "\n",
    "Related links \n",
    "\n",
    "  * [ Hierarchical clustering - Wikipedia ](https://en.wikipedia.org/wiki/Hierarchical_clustering)\n",
    "\n",
    "  * [ BIRCH - Wikipedia ](https://en.wikipedia.org/wiki/BIRCH)\n"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  },
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
