{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "66e0d34e",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "下面我们实现*k*均值算法，进行文本聚类。这里使用的数据集与第4章的数据集类似，包含3种主题约1万本图书的信息，但文本内容是图书摘要而非标题。首先我们复用第4章的代码进行预处理。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "ce835777",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "train size = 8627 , test size = 2157\n",
      "{0: '计算机类', 1: '艺术传媒类', 2: '经管类'}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████████████████████████████████████████████████████████████████████████| 8627/8627 [02:58<00:00, 48.41it/s]\n",
      "100%|██████████████████████████████████████████████████████████████████████████████| 2157/2157 [00:44<00:00, 48.51it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "unique tokens = 34252, total counts = 806900, max freq = 19197, min freq = 1\n",
      "min_freq = 3, min_len = 2, max_size = None, remaining tokens = 9504, in-vocab rate = 0.8910459784359895\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import sys\n",
    "\n",
    "# 导入前面实现的Books数据集\n",
    "sys.path.append('./code')\n",
    "from my_utils import BooksDataset\n",
    "\n",
    "dataset = BooksDataset()\n",
    "# 打印出类和标签ID\n",
    "print(dataset.id2label)\n",
    "\n",
    "dataset.tokenize(attr='abstract')\n",
    "dataset.build_vocab(min_freq=3)\n",
    "dataset.convert_tokens_to_ids()\n",
    "\n",
    "train_data, test_data = dataset.train_data, dataset.test_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96285754",
   "metadata": {},
   "source": [
    "接下来导入实现TF-IDF算法的函数，将处理后的数据集输入到函数中，得到文档特征："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "1a16e90b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(8627, 9504)\n"
     ]
    }
   ],
   "source": [
    "# 导入之前实现的TF-IDF算法\n",
    "from my_utils import TFIDF\n",
    "\n",
    "vocab_size = len(dataset.token2id)\n",
    "train_X = []\n",
    "for data in train_data:\n",
    "    train_X.append(data['token_ids'])\n",
    "# 对TF-IDF的结果进行归一化（norm='l2'）对聚类非常重要，\n",
    "# 不经过归一化会导致数据在某些方向上过于分散从而聚类失败\n",
    "# 初始化TFIDF()函数\n",
    "tfidf = TFIDF(vocab_size, norm='l2', smooth_idf=True, sublinear_tf=True)\n",
    "# 计算词频率和逆文档频率\n",
    "tfidf.fit(train_X)\n",
    "# 转化为TF-IDF向量\n",
    "train_F = tfidf.transform(train_X)\n",
    "print(train_F.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8f04c44",
   "metadata": {},
   "source": [
    "在有了数据之后，运行*k*均值聚类算法为文本进行聚类。我们需要事先确定簇数$K$。为了方便与实际的标签数据进行对比，这里假设$K$为3。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "d8493c19",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-----------初始化-----------\n",
      "-----------初始化完成-----------\n",
      "第1步，中心点平均移动距离：0.08388954150305489\n",
      "第2步，中心点平均移动距离：0.049141358491505444\n",
      "第3步，中心点平均移动距离：0.021989238015728777\n",
      "第4步，中心点平均移动距离：0.011405166746971847\n",
      "第5步，中心点平均移动距离：0.007266845582171059\n",
      "第6步，中心点平均移动距离：0.004477562120517176\n",
      "第7步，中心点平均移动距离：0.002502414639464703\n",
      "第8步，中心点平均移动距离：0.0020969301999047493\n",
      "第9步，中心点平均移动距离：0.001402562032118973\n",
      "第10步，中心点平均移动距离：0.0014152647062448823\n",
      "第11步，中心点平均移动距离：0.0010521922205282135\n",
      "第12步，中心点平均移动距离：0.0009258959025506048\n",
      "第13步，中心点平均移动距离：0.0007552705413307459\n",
      "第14步，中心点平均移动距离：0.0007675308956893839\n",
      "第15步，中心点平均移动距离：0.0006505170684357614\n",
      "第16步，中心点平均移动距离：0.0005170134419422099\n",
      "第17步，中心点平均移动距离：0.00046345558811682297\n",
      "第18步，中心点平均移动距离：0.0004573737786987787\n",
      "第19步，中心点平均移动距离：0.0\n",
      "中心点不再移动，退出程序\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# 更改簇的标签数量\n",
    "K = 3\n",
    "\n",
    "class KMeans:\n",
    "    def __init__(self, K, dim, stop_val = 1e-4, max_step = 100):\n",
    "        self.K = K\n",
    "        self.dim = dim\n",
    "        self.stop_val = stop_val\n",
    "        self.max_step = max_step\n",
    "\n",
    "    def update_mean_vec(self, X):\n",
    "        mean_vec = np.zeros([self.K, self.dim])\n",
    "        for k in range(self.K):\n",
    "            data = X[self.cluster_num == k]\n",
    "            if len(data) > 0:\n",
    "                mean_vec[k] = data.mean(axis=0)\n",
    "        return mean_vec\n",
    "    \n",
    "    # 运行k均值算法的迭代循环\n",
    "    def fit(self, X):\n",
    "        print('-----------初始化-----------')\n",
    "        N = len(X)\n",
    "        dim = len(X[0])\n",
    "        # 给每个数据点随机分配簇\n",
    "        self.cluster_num = np.random.randint(0, self.K, N)\n",
    "        self.mean_vec = self.update_mean_vec(X)\n",
    "        \n",
    "        print('-----------初始化完成-----------')\n",
    "        global_step = 0\n",
    "        while global_step < self.max_step:\n",
    "            global_step += 1\n",
    "            self.cluster_num = np.zeros(N, int) \n",
    "            for i, data_point in enumerate(X):\n",
    "                # 计算每个数据点和每个簇中心的L2距离\n",
    "                dist = np.linalg.norm(data_point[None, :] - \\\n",
    "                    self.mean_vec, ord=2, axis=-1)\n",
    "                # 找到每个数据点所属新的聚类\n",
    "                self.cluster_num[i] = dist.argmin(-1)\n",
    "\n",
    "            '''\n",
    "            上面的循环过程也可以以下面的代码进行并行处理，但是可能\n",
    "            会使得显存过大，建议在数据点的特征向量维度较小时\n",
    "            或者进行降维后使用\n",
    "            # N x D - K x D -> N x K x D\n",
    "            dist = np.linalg.norm(train_X[:,None,:] - self.mean_vec, \\\n",
    "                ord = 2, axis = -1) \n",
    "            # 找到每个数据点所属新的聚类\n",
    "            self.cluster_num = dist.argmin(-1)\n",
    "            '''\n",
    "\n",
    "            new_mean_vec = self.update_mean_vec(X)\n",
    "\n",
    "            # 计算新的簇中心点和上一步迭代的中心点的距离\n",
    "            moving_dist = np.linalg.norm(new_mean_vec - self.mean_vec,\\\n",
    "                ord = 2, axis = -1).mean()\n",
    "            print(f\"第{global_step}步，中心点平均移动距离：{moving_dist}\")\n",
    "            if moving_dist < self.stop_val:\n",
    "                print(\"中心点不再移动，退出程序\")\n",
    "                break\n",
    "\n",
    "            # 将mean_vec更新\n",
    "            self.mean_vec = new_mean_vec\n",
    "\n",
    "kmeans = KMeans(K, train_F.shape[1])\n",
    "kmeans.fit(train_F)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b8f3f765",
   "metadata": {},
   "source": [
    "为了更直观地展示聚类的效果，我们定义show_clusters()这个函数，显示每个真实分类下包含的每个簇的比重。下面对*k*均值算法的聚类结果进行展示，并观察3个标签中不同簇的占比。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "1c3158e6",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "8627\n",
      "计算机类:\t{ 0: 3084(0.80), 1: 13(0.00), 2: 745(0.19), }\n",
      "艺术传媒类:\t{ 0: 111(0.05), 1: 1404(0.61), 2: 785(0.34), }\n",
      "经管类:\t{ 0: 51(0.02), 1: 1(0.00), 2: 2433(0.98), }\n"
     ]
    }
   ],
   "source": [
    "# 取出每条数据的标签和标签ID\n",
    "labels = []\n",
    "for data in train_data:\n",
    "    labels.append(data['label'])\n",
    "print(len(labels))\n",
    "\n",
    "# 展示聚类结果\n",
    "def show_clusters(clusters, K):\n",
    "    # 每个标签下的数据可能被聚类到不同的簇，因此对所有标签、所有簇进行初始化\n",
    "    label_clusters = {label_id: {} for label_id in dataset.id2label}\n",
    "    for k, v in label_clusters.items():\n",
    "        label_clusters[k] = {i: 0 for i in range(K)}\n",
    "    # 统计每个标签下，分到每个簇的数据条数\n",
    "    for label_id, cluster_id in zip(labels, clusters):\n",
    "        label_clusters[label_id][cluster_id] += 1\n",
    "        \n",
    "    for label_id in sorted(dataset.id2label.keys()):\n",
    "        _str = dataset.id2label[label_id] + ':\\t{ '\n",
    "        for cluster_id in range(K):\n",
    "            # 计算label_id这个标签ID下，簇为cluster_id的占比\n",
    "            _cnt = label_clusters[label_id][cluster_id]\n",
    "            _total = sum(label_clusters[label_id].values())\n",
    "            _str += f'{str(cluster_id)}: {_cnt}({_cnt / _total:.2f}), '\n",
    "        _str += '}'\n",
    "        print(_str)\n",
    "\n",
    "clusters = kmeans.cluster_num\n",
    "show_clusters(clusters, K)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be29cb62",
   "metadata": {},
   "source": [
    "接下来演示如何使用高斯混合来进行聚类。注意高斯混合会计算每个数据点归属于各簇的概率分布，这里将概率最高的簇作为聚类输出。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "9353dbb8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.stats import multivariate_normal as gaussian\n",
    "from tqdm import tqdm\n",
    "\n",
    "# 高斯混合模型\n",
    "class GMM:\n",
    "    def __init__(self, K, dim, max_iter=100):\n",
    "        # K为聚类数目，dim为向量维度，max_iter为最大迭代次数\n",
    "        self.K = K\n",
    "        self.dim = dim\n",
    "        self.max_iter = max_iter\n",
    "        \n",
    "        # 初始化，pi = 1/K为先验概率，miu ~[-1,1]为高斯分布的均值，\n",
    "        # sigma = eye为高斯分布的协方差矩阵\n",
    "        self.pi = np.ones(K) / K\n",
    "        self.miu = np.random.rand(K, dim) * 2 - 1\n",
    "        self.sigma = np.zeros((K, dim, dim))\n",
    "        for i in range(K):\n",
    "            self.sigma[i] = np.eye(dim)\n",
    "        \n",
    "    # GMM的E步骤\n",
    "    def E_step(self, X):\n",
    "        # 计算每个数据点被分到不同簇的密度\n",
    "        for i in range(self.K):\n",
    "            self.Y[:, i] = self.pi[i] * gaussian.pdf(X, \\\n",
    "                mean=self.miu[i], cov=self.sigma[i])\n",
    "        # 对密度进行归一化，得到概率分布\n",
    "        self.Y /= self.Y.sum(axis=1, keepdims=True)\n",
    "    \n",
    "    # GMM的M步骤\n",
    "    def M_step(self, X):\n",
    "        # 更新先验概率分布\n",
    "        Y_sum = self.Y.sum(axis=0)\n",
    "        self.pi = Y_sum / self.N\n",
    "        # 更新每个簇的均值\n",
    "        self.miu = np.matmul(self.Y.T, X) / Y_sum[:, None]\n",
    "        # 更新每个簇的协方差矩阵\n",
    "        for i in range(self.K):\n",
    "            # N * 1 * D\n",
    "            delta = np.expand_dims(X, axis=1) - self.miu[i]\n",
    "            # N * D * D\n",
    "            sigma = np.matmul(delta.transpose(0, 2, 1), delta)\n",
    "            # D * D\n",
    "            self.sigma[i] = np.matmul(sigma.transpose(1, 2, 0),\\\n",
    "                self.Y[:, i]) / Y_sum[i]\n",
    "    \n",
    "    # 计算对数似然，用于判断迭代终止\n",
    "    def log_likelihood(self, X):\n",
    "        ll = 0\n",
    "        for x in X:\n",
    "            p = 0\n",
    "            for i in range(self.K):\n",
    "                p += self.pi[i] * gaussian.pdf(x, mean=self.miu[i],\\\n",
    "                    cov=self.sigma[i])\n",
    "            ll += np.log(p)\n",
    "        return ll / self.N\n",
    "    \n",
    "    # 运行GMM算法的E步骤、M步骤迭代循环\n",
    "    def fit(self, X):\n",
    "        self.N = len(X)\n",
    "        self.Y = np.zeros((self.N, self.K))\n",
    "        ll = self.log_likelihood(X)\n",
    "        print('开始迭代')\n",
    "        for i in range(self.max_iter):\n",
    "            self.E_step(X)\n",
    "            self.M_step(X)\n",
    "            new_ll = self.log_likelihood(X)\n",
    "            print(f'第{i}步, log-likelihood = {new_ll:.4f}')\n",
    "            if new_ll - ll < 1e-4:\n",
    "                print('log-likelihood不再变化，退出程序')\n",
    "                break\n",
    "            else:\n",
    "                ll = new_ll\n",
    "    \n",
    "    # 根据学习到的参数将一个数据点分配到概率最大的簇\n",
    "    def transform(self, X):\n",
    "        assert hasattr(self, 'Y') and len(self.Y) == len(X)\n",
    "        return np.argmax(self.Y, axis=1)\n",
    "    \n",
    "    def fit_transform(self, X):\n",
    "        self.fit(X)\n",
    "        return self.transform(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e5c256e",
   "metadata": {},
   "source": [
    "与*k*均值聚类方法类似，在使用最大期望值法的高斯混合的情况下，观察在Books数据集3个真实类别中不同簇的占比："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "259eb004",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始迭代\n",
      "第0步, log-likelihood = 77.5675\n",
      "第1步, log-likelihood = 89.1667\n",
      "第2步, log-likelihood = 92.2415\n",
      "第3步, log-likelihood = 93.2498\n",
      "第4步, log-likelihood = 93.9960\n",
      "第5步, log-likelihood = 94.6869\n",
      "第6步, log-likelihood = 95.2511\n",
      "第7步, log-likelihood = 95.4360\n",
      "第8步, log-likelihood = 95.5955\n",
      "第9步, log-likelihood = 95.8093\n",
      "第10步, log-likelihood = 96.0249\n",
      "第11步, log-likelihood = 96.0735\n",
      "第12步, log-likelihood = 96.1138\n",
      "第13步, log-likelihood = 96.1502\n",
      "第14步, log-likelihood = 96.2036\n",
      "第15步, log-likelihood = 96.2624\n",
      "第16步, log-likelihood = 96.3106\n",
      "第17步, log-likelihood = 96.3696\n",
      "第18步, log-likelihood = 96.3967\n",
      "第19步, log-likelihood = 96.4185\n",
      "第20步, log-likelihood = 96.4557\n",
      "第21步, log-likelihood = 96.4871\n",
      "第22步, log-likelihood = 96.5115\n",
      "第23步, log-likelihood = 96.5509\n",
      "第24步, log-likelihood = 96.6996\n",
      "第25步, log-likelihood = 96.8293\n",
      "第26步, log-likelihood = 96.8785\n",
      "第27步, log-likelihood = 96.9203\n",
      "第28步, log-likelihood = 96.9978\n",
      "第29步, log-likelihood = 97.0937\n",
      "第30步, log-likelihood = 97.1200\n",
      "第31步, log-likelihood = 97.1639\n",
      "第32步, log-likelihood = 97.2111\n",
      "第33步, log-likelihood = 97.2649\n",
      "第34步, log-likelihood = 97.3143\n",
      "第35步, log-likelihood = 97.3623\n",
      "第36步, log-likelihood = 97.4096\n",
      "第37步, log-likelihood = 97.4649\n",
      "第38步, log-likelihood = 97.5401\n",
      "第39步, log-likelihood = 97.6544\n",
      "第40步, log-likelihood = 97.7679\n",
      "第41步, log-likelihood = 98.0195\n",
      "第42步, log-likelihood = 98.2845\n",
      "第43步, log-likelihood = 98.3550\n",
      "第44步, log-likelihood = 98.3969\n",
      "第45步, log-likelihood = 98.4460\n",
      "第46步, log-likelihood = 98.4943\n",
      "第47步, log-likelihood = 98.5400\n",
      "第48步, log-likelihood = 98.5611\n",
      "第49步, log-likelihood = 98.5669\n",
      "第50步, log-likelihood = 98.5693\n",
      "第51步, log-likelihood = 98.5701\n",
      "第52步, log-likelihood = 98.5705\n",
      "第53步, log-likelihood = 98.5707\n",
      "第54步, log-likelihood = 98.5710\n",
      "第55步, log-likelihood = 98.5711\n",
      "第56步, log-likelihood = 98.5712\n",
      "log-likelihood不再变化，退出程序\n",
      "[0 1 0 ... 0 2 2]\n",
      "计算机类:\t{ 0: 2489(0.65), 1: 42(0.01), 2: 1311(0.34), }\n",
      "艺术传媒类:\t{ 0: 12(0.01), 1: 1951(0.85), 2: 337(0.15), }\n",
      "经管类:\t{ 0: 1461(0.59), 1: 506(0.20), 2: 518(0.21), }\n"
     ]
    }
   ],
   "source": [
    "# 直接对TF-IDF特征聚类运行速度过慢，因此使用PCA降维，将TF-IDF向量降到50维\n",
    "from sklearn.decomposition import PCA\n",
    "pca = PCA(n_components=50)\n",
    "train_P = pca.fit_transform(train_F)\n",
    "\n",
    "# 运行GMM算法，展示聚类结果\n",
    "gmm = GMM(K, dim=train_P.shape[1])\n",
    "clusters = gmm.fit_transform(train_P)\n",
    "print(clusters)\n",
    "show_clusters(clusters, K)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b58851b6",
   "metadata": {},
   "source": [
    "下面演示基于朴素贝叶斯模型的聚类算法实现："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "f215a250",
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.special import logsumexp\n",
    "\n",
    "# 无监督朴素贝叶斯\n",
    "class UnsupervisedNaiveBayes:\n",
    "    def __init__(self, K, dim, max_iter=100):\n",
    "        self.K = K\n",
    "        self.dim = dim\n",
    "        self.max_iter = max_iter\n",
    "        \n",
    "        # 初始化参数，pi为先验概率分布，P用于保存K个朴素贝叶斯模型的参数\n",
    "        self.pi = np.ones(K) / K\n",
    "        self.P = np.random.random((K, dim))\n",
    "        self.P /= self.P.sum(axis=1, keepdims=True)\n",
    "        \n",
    "    # E步骤\n",
    "    def E_step(self, X):\n",
    "        # 根据朴素贝叶斯公式，计算每个数据点分配到每个簇的概率分布\n",
    "        for i, x in enumerate(X):\n",
    "            # 由于朴素贝叶斯使用了许多概率连乘，容易导致精度溢出，\n",
    "            # 因此使用对数概率\n",
    "            self.Y[i, :] = np.log(self.pi) + (np.log(self.P) *\\\n",
    "                x).sum(axis=1)\n",
    "            # 使用对数概率、logsumexp和exp，等价于直接计算概率，\n",
    "            # 好处是数值更加稳定\n",
    "            self.Y[i, :] -= logsumexp(self.Y[i, :])\n",
    "            self.Y[i, :] = np.exp(self.Y[i, :])\n",
    "    \n",
    "    # M步骤\n",
    "    def M_step(self, X):\n",
    "        # 根据估计的簇概率分布更新先验概率分布\n",
    "        self.pi = self.Y.sum(axis=0) / self.N\n",
    "        self.pi /= self.pi.sum()\n",
    "        # 更新每个朴素贝叶斯模型的参数\n",
    "        for i in range(self.K):\n",
    "            self.P[i] = (self.Y[:, i:i+1] * X).sum(axis=0) / \\\n",
    "                (self.Y[:, i] * X.sum(axis=1)).sum()\n",
    "        # 防止除0\n",
    "        self.P += 1e-10\n",
    "        self.P /= self.P.sum(axis=1, keepdims=True)\n",
    "    \n",
    "    # 计算对数似然，用于判断迭代终止\n",
    "    def log_likelihood(self, X):\n",
    "        ll = 0\n",
    "        for x in X:\n",
    "            # 使用对数概率和logsumexp防止精度溢出\n",
    "            logp = []\n",
    "            for i in range(self.K):\n",
    "                logp.append(np.log(self.pi[i]) + (np.log(self.P[i]) *\\\n",
    "                    x).sum())\n",
    "            ll += logsumexp(logp)\n",
    "        return ll / len(X)\n",
    "    \n",
    "    # 无监督朴素贝叶斯的迭代循环\n",
    "    def fit(self, X):\n",
    "        self.N = len(X)\n",
    "        self.Y = np.zeros((self.N, self.K))\n",
    "        ll = self.log_likelihood(X)\n",
    "        print(f'初始化log-likelihood = {ll:.4f}')\n",
    "        print('开始迭代')\n",
    "        for i in range(self.max_iter):\n",
    "            self.E_step(X)\n",
    "            self.M_step(X)\n",
    "            new_ll = self.log_likelihood(X)\n",
    "            print(f'第{i}步, log-likelihood = {new_ll:.4f}')\n",
    "            if new_ll - ll < 1e-4:\n",
    "                print('log-likelihood不再变化，退出程序')\n",
    "                break\n",
    "            else:\n",
    "                ll = new_ll\n",
    "    \n",
    "    def transform(self, X):\n",
    "        assert hasattr(self, 'Y') and len(self.Y) == len(X)\n",
    "        return np.argmax(self.Y, axis=1)\n",
    "    \n",
    "    def fit_transform(self, X):\n",
    "        self.fit(X)\n",
    "        return self.transform(X)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "57113e8b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "初始化log-likelihood = -776.9423\n",
      "开始迭代\n",
      "第0步, log-likelihood = -590.4321\n",
      "第1步, log-likelihood = -585.1254\n",
      "第2步, log-likelihood = -582.1384\n",
      "第3步, log-likelihood = -580.0885\n",
      "第4步, log-likelihood = -578.6080\n",
      "第5步, log-likelihood = -577.6417\n",
      "第6步, log-likelihood = -577.1351\n",
      "第7步, log-likelihood = -576.8146\n",
      "第8步, log-likelihood = -576.4717\n",
      "第9步, log-likelihood = -576.2265\n",
      "第10步, log-likelihood = -576.0407\n",
      "第11步, log-likelihood = -575.8656\n",
      "第12步, log-likelihood = -575.6803\n",
      "第13步, log-likelihood = -575.3848\n",
      "第14步, log-likelihood = -575.0835\n",
      "第15步, log-likelihood = -574.9549\n",
      "第16步, log-likelihood = -574.8489\n",
      "第17步, log-likelihood = -574.7645\n",
      "第18步, log-likelihood = -574.7048\n",
      "第19步, log-likelihood = -574.6518\n",
      "第20步, log-likelihood = -574.6002\n",
      "第21步, log-likelihood = -574.5424\n",
      "第22步, log-likelihood = -574.4709\n",
      "第23步, log-likelihood = -574.4097\n",
      "第24步, log-likelihood = -574.3393\n",
      "第25步, log-likelihood = -574.2187\n",
      "第26步, log-likelihood = -574.0763\n",
      "第27步, log-likelihood = -573.9881\n",
      "第28步, log-likelihood = -573.9545\n",
      "第29步, log-likelihood = -573.9347\n",
      "第30步, log-likelihood = -573.9182\n",
      "第31步, log-likelihood = -573.8969\n",
      "第32步, log-likelihood = -573.8856\n",
      "第33步, log-likelihood = -573.8747\n",
      "第34步, log-likelihood = -573.8615\n",
      "第35步, log-likelihood = -573.8513\n",
      "第36步, log-likelihood = -573.8352\n",
      "第37步, log-likelihood = -573.8269\n",
      "第38步, log-likelihood = -573.8197\n",
      "第39步, log-likelihood = -573.8060\n",
      "第40步, log-likelihood = -573.7852\n",
      "第41步, log-likelihood = -573.7737\n",
      "第42步, log-likelihood = -573.7516\n",
      "第43步, log-likelihood = -573.7100\n",
      "第44步, log-likelihood = -573.6784\n",
      "第45步, log-likelihood = -573.6340\n",
      "第46步, log-likelihood = -573.5735\n",
      "第47步, log-likelihood = -573.5151\n",
      "第48步, log-likelihood = -573.4547\n",
      "第49步, log-likelihood = -573.3714\n",
      "第50步, log-likelihood = -573.3092\n",
      "第51步, log-likelihood = -573.2388\n",
      "第52步, log-likelihood = -573.1420\n",
      "第53步, log-likelihood = -573.0407\n",
      "第54步, log-likelihood = -572.9245\n",
      "第55步, log-likelihood = -572.8520\n",
      "第56步, log-likelihood = -572.7605\n",
      "第57步, log-likelihood = -572.6410\n",
      "第58步, log-likelihood = -572.5134\n",
      "第59步, log-likelihood = -572.3964\n",
      "第60步, log-likelihood = -572.2978\n",
      "第61步, log-likelihood = -572.2151\n",
      "第62步, log-likelihood = -572.1729\n",
      "第63步, log-likelihood = -572.1249\n",
      "第64步, log-likelihood = -572.0520\n",
      "第65步, log-likelihood = -571.9574\n",
      "第66步, log-likelihood = -571.8832\n",
      "第67步, log-likelihood = -571.8112\n",
      "第68步, log-likelihood = -571.7362\n",
      "第69步, log-likelihood = -571.6462\n",
      "第70步, log-likelihood = -571.5709\n",
      "第71步, log-likelihood = -571.4907\n",
      "第72步, log-likelihood = -571.4312\n",
      "第73步, log-likelihood = -571.3633\n",
      "第74步, log-likelihood = -571.2984\n",
      "第75步, log-likelihood = -571.1960\n",
      "第76步, log-likelihood = -571.1359\n",
      "第77步, log-likelihood = -571.0780\n",
      "第78步, log-likelihood = -571.0413\n",
      "第79步, log-likelihood = -571.0001\n",
      "第80步, log-likelihood = -570.9497\n",
      "第81步, log-likelihood = -570.8287\n",
      "第82步, log-likelihood = -570.5297\n",
      "第83步, log-likelihood = -570.0153\n",
      "第84步, log-likelihood = -569.6964\n",
      "第85步, log-likelihood = -569.4845\n",
      "第86步, log-likelihood = -569.3923\n",
      "第87步, log-likelihood = -569.3002\n",
      "第88步, log-likelihood = -569.2679\n",
      "第89步, log-likelihood = -569.2464\n",
      "第90步, log-likelihood = -569.2150\n",
      "第91步, log-likelihood = -569.1928\n",
      "第92步, log-likelihood = -569.1741\n",
      "第93步, log-likelihood = -569.1655\n",
      "第94步, log-likelihood = -569.1460\n",
      "第95步, log-likelihood = -569.1435\n",
      "第96步, log-likelihood = -569.1419\n",
      "第97步, log-likelihood = -569.1409\n",
      "第98步, log-likelihood = -569.1406\n",
      "第99步, log-likelihood = -569.1404\n",
      "[1 0 1 ... 1 0 1]\n",
      "计算机类:\t{ 0: 913(0.24), 1: 2182(0.57), 2: 747(0.19), }\n",
      "艺术传媒类:\t{ 0: 2122(0.92), 1: 27(0.01), 2: 151(0.07), }\n",
      "经管类:\t{ 0: 53(0.02), 1: 463(0.19), 2: 1969(0.79), }\n"
     ]
    }
   ],
   "source": [
    "# 根据朴素贝叶斯模型，需要统计出每个数据点包含的词表中每个词的数目\n",
    "train_C = np.zeros((len(train_X), vocab_size))\n",
    "for i, data in enumerate(train_X):\n",
    "    for token_id in data:\n",
    "        train_C[i, token_id] += 1\n",
    "\n",
    "unb = UnsupervisedNaiveBayes(K, dim=vocab_size, max_iter=100)\n",
    "clusters = unb.fit_transform(train_C)\n",
    "print(clusters)\n",
    "show_clusters(clusters, K)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
