{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Cluster-GCN: Theory and Practice\n",
        "\n",
        "References (in Chinese):\n",
        "\n",
        "https://zhuanlan.zhihu.com/p/593856211\n",
        "\n",
        "https://www.cnblogs.com/mingye7/p/14995668.html\n",
        "\n",
        "https://blog.csdn.net/LuoMin2523/article/details/118394828\n",
        "\n",
        "# Installing the required packages\n",
        "\n",
        "torch_scatter, torch_sparse\n",
        "\n",
        "https://blog.csdn.net/weixin_42421914/article/details/132875571"
      ]
    },
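    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough installation sketch: the wheel index below is PyG's official one, but the `torch-2.0.0+cpu` tag is a placeholder and must match your installed PyTorch and CUDA versions (see the blog post above for details):\n",
        "\n",
        "```shell\n",
        "pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.0.0+cpu.html\n",
        "```"
      ]
    },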
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KDy46FIQ6OWN"
      },
      "source": [
        "# Scaling Graph Neural Networks\n",
        "\n",
        "So far, we have trained graph neural networks for node classification in a fully full-batch fashion. In particular, this means that the hidden representation of every node is computed in parallel and can be reused in the next layer.\n",
        "\n",
        "However, this scheme is no longer feasible once we want to operate on larger graphs, since memory consumption explodes.\n",
        "For example, a graph with about 10 million nodes and 128-dimensional hidden features already consumes around **5GB of GPU memory** per layer. As a result, there have been several recent efforts to scale GNNs to larger graphs. One of these approaches is known as **Cluster-GCN** ([Chiang et al. (2019)](https://arxiv.org/abs/1905.07953)), which is based on pre-partitioning the graph into subgraphs that can be operated on in a mini-batch fashion.\n",
        "\n",
        "To demonstrate, let's load the `PubMed` graph from the Planetoid node classification benchmark suite ([Yang et al. (2016)](https://arxiv.org/abs/1603.08861)):"
      ]
    },
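    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The **5GB** figure above follows from a quick back-of-the-envelope calculation, assuming 32-bit floats (4 bytes per value):\n",
        "\n",
        "```python\n",
        "num_nodes = 10_000_000   # ~10 million nodes\n",
        "hidden_dim = 128         # 128-dimensional hidden features\n",
        "bytes_per_float = 4      # float32\n",
        "\n",
        "gigabytes = num_nodes * hidden_dim * bytes_per_float / 1e9\n",
        "print(f'{gigabytes:.2f} GB per layer')  # -> 5.12 GB per layer\n",
        "```"
      ]
    },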
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "eBN2pGDueDpZ",
        "outputId": "5596858d-260d-4963-c375-c15ef25881c0"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "Dataset: PubMed():\n",
            "==================\n",
            "Number of graphs: 1\n",
            "Number of features: 500\n",
            "Number of classes: 3\n",
            "\n",
            "Data(x=[19717, 500], edge_index=[2, 88648], y=[19717], train_mask=[19717], val_mask=[19717], test_mask=[19717])\n",
            "===============================================================================================================\n",
            "Number of nodes: 19717\n",
            "Number of edges: 88648\n",
            "Average node degree: 4.50\n",
            "Number of training nodes: 60\n",
            "Training node label rate: 0.003\n",
            "Has isolated nodes: False\n",
            "Has self-loops: False\n",
            "Is undirected: True\n"
          ]
        }
      ],
      "source": [
        "import torch\n",
        "from torch_geometric.datasets import Planetoid\n",
        "from torch_geometric.transforms import NormalizeFeatures\n",
        "\n",
        "dataset = Planetoid(root='data/Planetoid', name='PubMed', transform=NormalizeFeatures())\n",
        "\n",
        "print()\n",
        "print(f'Dataset: {dataset}:')\n",
        "print('==================')\n",
        "print(f'Number of graphs: {len(dataset)}')\n",
        "print(f'Number of features: {dataset.num_features}')\n",
        "print(f'Number of classes: {dataset.num_classes}')\n",
        "\n",
        "data = dataset[0]  # Get the first graph object.\n",
        "\n",
        "print()\n",
        "print(data)\n",
        "print('===============================================================================================================')\n",
        "\n",
        "# Gather some statistics about the graph.\n",
        "print(f'Number of nodes: {data.num_nodes}')\n",
        "print(f'Number of edges: {data.num_edges}')\n",
        "print(f'Average node degree: {data.num_edges / data.num_nodes:.2f}')\n",
        "print(f'Number of training nodes: {data.train_mask.sum()}')\n",
        "print(f'Training node label rate: {int(data.train_mask.sum()) / data.num_nodes:.3f}')\n",
        "print(f'Has isolated nodes: {data.has_isolated_nodes()}')\n",
        "print(f'Has self-loops: {data.has_self_loops()}')\n",
        "print(f'Is undirected: {data.is_undirected()}')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DEJtbHGs9JQ9"
      },
      "source": [
        "As one can see, this graph holds 19,717 nodes. While this number of nodes should fit comfortably into GPU memory, it still serves as a good example of how to scale GNNs within PyTorch Geometric.\n",
        "\n",
        "Cluster-GCN ([Chiang et al. (2019)](https://arxiv.org/abs/1905.07953)) works by first partitioning the graph into subgraphs based on a graph partitioning algorithm. The GNN is then restricted to convolve solely within its particular subgraph, which avoids the problem of **neighborhood explosion**."
      ]
    },
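    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see what partitioning alone gives up, here is a small self-contained sketch (a toy 6-node graph with a hand-picked 2-way split, not an actual METIS partition): restricting each batch to a single cluster silently drops every between-cluster edge.\n",
        "\n",
        "```python\n",
        "# Toy undirected graph, edges given as node pairs (hypothetical example).\n",
        "edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)]\n",
        "\n",
        "# Hand-picked 2-way partition (METIS would compute this automatically).\n",
        "cluster = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}\n",
        "\n",
        "intra = [e for e in edges if cluster[e[0]] == cluster[e[1]]]\n",
        "inter = [e for e in edges if cluster[e[0]] != cluster[e[1]]]\n",
        "\n",
        "print(f'kept within clusters: {intra}')      # 5 edges survive\n",
        "print(f'dropped between clusters: {inter}')  # -> [(2, 3)]\n",
        "```"
      ]
    },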
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zEWIwbHp-ofm"
      },
      "source": [
        "*Figure: the graph is partitioned into subgraphs, and convolutions are restricted to each cluster.*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sp__L20M-qns"
      },
      "source": [
        "However, partitioning the graph removes some links, which can limit the model's performance due to biased estimates. To address this issue, Cluster-GCN additionally incorporates the links between clusters inside a mini-batch, which leads to the following **stochastic partitioning scheme**:"
      ]
    },
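    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The scheme can be sketched in plain Python (a toy version with 4 clusters and 2 clusters per batch; the cluster ids are made up for illustration): every epoch, the clusters are reshuffled and grouped into batches, and a between-cluster edge survives whenever both of its clusters land in the same batch.\n",
        "\n",
        "```python\n",
        "import random\n",
        "\n",
        "clusters = [0, 1, 2, 3]  # toy cluster ids\n",
        "clusters_per_batch = 2\n",
        "\n",
        "random.shuffle(clusters)  # re-drawn at every epoch\n",
        "batches = [clusters[i:i + clusters_per_batch]\n",
        "           for i in range(0, len(clusters), clusters_per_batch)]\n",
        "\n",
        "# A between-cluster edge (c1, c2) is kept iff c1 and c2 fall into the same batch.\n",
        "def edge_kept(c1, c2):\n",
        "    return any(c1 in batch and c2 in batch for batch in batches)\n",
        "\n",
        "print(batches)                                  # e.g. [[2, 0], [1, 3]]\n",
        "print(edge_kept(batches[0][0], batches[0][1]))  # always True: same batch\n",
        "```"
      ]
    },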
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0QfBQ_jj_24E"
      },
      "source": [
        "*Figure: the stochastic partitioning scheme, where multiple clusters are combined into each mini-batch so that between-cluster links are retained.*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RwuwXWLZ_4OQ"
      },
      "source": [
        "Here, the colors indicate the adjacency information that is maintained in each batch (which may differ from epoch to epoch).\n",
        "\n",
        "PyTorch Geometric provides a **two-stage implementation** of the Cluster-GCN algorithm:\n",
        "1. [**`ClusterData`**](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.ClusterData) converts a `Data` object into a dataset of subgraphs containing `num_parts` partitions.\n",
        "2. Given a user-defined `batch_size`, [**`ClusterLoader`**](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.ClusterLoader) implements the stochastic partitioning scheme in order to create mini-batches.\n",
        "\n",
        "The procedure for creating mini-batches then looks as follows:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "zwhcs1L5fPjx",
        "outputId": "182d20fd-57ef-41bf-ed19-a0910d0f83de"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Computing METIS partitioning...\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "Step 1:\n",
            "=======\n",
            "Number of nodes in the current batch: 4916\n",
            "Data(x=[4916, 500], y=[4916], train_mask=[4916], val_mask=[4916], test_mask=[4916], edge_index=[2, 16180])\n",
            "\n",
            "Step 2:\n",
            "=======\n",
            "Number of nodes in the current batch: 4909\n",
            "Data(x=[4909, 500], y=[4909], train_mask=[4909], val_mask=[4909], test_mask=[4909], edge_index=[2, 15912])\n",
            "\n",
            "Step 3:\n",
            "=======\n",
            "Number of nodes in the current batch: 4941\n",
            "Data(x=[4941, 500], y=[4941], train_mask=[4941], val_mask=[4941], test_mask=[4941], edge_index=[2, 15958])\n",
            "\n",
            "Step 4:\n",
            "=======\n",
            "Number of nodes in the current batch: 4951\n",
            "Data(x=[4951, 500], y=[4951], train_mask=[4951], val_mask=[4951], test_mask=[4951], edge_index=[2, 18662])\n",
            "\n",
            "Iterated over 19717 of 19717 nodes!\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Done!\n"
          ]
        }
      ],
      "source": [
        "from torch_geometric.loader import ClusterData, ClusterLoader\n",
        "\n",
        "torch.manual_seed(12345)\n",
        "cluster_data = ClusterData(data, num_parts=128)  # 1. Create subgraphs.\n",
        "train_loader = ClusterLoader(cluster_data, batch_size=32, shuffle=True)  # 2. Stochastic partitioning scheme.\n",
        "\n",
        "print()\n",
        "total_num_nodes = 0\n",
        "for step, sub_data in enumerate(train_loader):\n",
        "    print(f'Step {step + 1}:')\n",
        "    print('=======')\n",
        "    print(f'Number of nodes in the current batch: {sub_data.num_nodes}')\n",
        "    print(sub_data)\n",
        "    print()\n",
        "    total_num_nodes += sub_data.num_nodes\n",
        "\n",
        "print(f'Iterated over {total_num_nodes} of {data.num_nodes} nodes!')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rv1wKy6VBOp1"
      },
      "source": [
        "Here, we partition the initial graph into **128 partitions** and use a **`batch_size` of 32 subgraphs** to form the mini-batches (leaving us with 4 batches per epoch).\n",
        "As one can see, after one epoch, each node has been seen exactly once.\n",
        "\n",
        "The great thing about Cluster-GCN is that it does not make the GNN model implementation any more complicated.\n",
        "Here, we can use the **exact same architecture** introduced in [this notebook](https://colab.research.google.com/drive/14OvFnAXggxB8vM4e8vSURUp1TaKnovzX)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "A1ho60LwfkHR",
        "outputId": "6d77312b-878c-49c6-d26c-588489d43b10"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "GCN(\n",
            "  (conv1): GCNConv(500, 16)\n",
            "  (conv2): GCNConv(16, 3)\n",
            ")\n"
          ]
        }
      ],
      "source": [
        "import torch.nn.functional as F\n",
        "from torch_geometric.nn import GCNConv\n",
        "\n",
        "class GCN(torch.nn.Module):\n",
        "    def __init__(self, hidden_channels):\n",
        "        super(GCN, self).__init__()\n",
        "        torch.manual_seed(12345)\n",
        "        self.conv1 = GCNConv(dataset.num_node_features, hidden_channels)\n",
        "        self.conv2 = GCNConv(hidden_channels, dataset.num_classes)\n",
        "\n",
        "    def forward(self, x, edge_index):\n",
        "        x = self.conv1(x, edge_index)\n",
        "        x = x.relu()\n",
        "        x = F.dropout(x, p=0.5, training=self.training)\n",
        "        x = self.conv2(x, edge_index)\n",
        "        return x\n",
        "\n",
        "model = GCN(hidden_channels=16)\n",
        "print(model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o0JHjtY-CCkW"
      },
      "source": [
        "Training this graph neural network is then very similar to training a GNN for graph classification. Instead of operating on the full graph in a full-batch fashion, we now **iterate over each mini-batch** and **optimize on each batch independently**:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 300
        },
        "id": "wLxawwYlgjDb",
        "outputId": "9240b29b-00d0-456b-d669-c9bf16480139"
      },
      "outputs": [
        {
          "data": {
            "application/javascript": "google.colab.output.setIframeHeight(0, true, {maxHeight: 300})",
            "text/plain": [
              "<IPython.core.display.Javascript object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Epoch: 001, Train: 0.3333, Val Acc: 0.4160, Test Acc: 0.4070\n",
            "Epoch: 002, Train: 0.3333, Val Acc: 0.4160, Test Acc: 0.4070\n",
            "Epoch: 003, Train: 0.3333, Val Acc: 0.4160, Test Acc: 0.4070\n",
            "Epoch: 004, Train: 0.3333, Val Acc: 0.4160, Test Acc: 0.4070\n",
            "Epoch: 005, Train: 0.3333, Val Acc: 0.4160, Test Acc: 0.4070\n",
            "Epoch: 006, Train: 0.3333, Val Acc: 0.4160, Test Acc: 0.4100\n",
            "Epoch: 007, Train: 0.5000, Val Acc: 0.4820, Test Acc: 0.4730\n",
            "Epoch: 008, Train: 0.6333, Val Acc: 0.5140, Test Acc: 0.5240\n",
            "Epoch: 009, Train: 0.6833, Val Acc: 0.5420, Test Acc: 0.5440\n",
            "Epoch: 010, Train: 0.6667, Val Acc: 0.5320, Test Acc: 0.5240\n",
            "Epoch: 011, Train: 0.7667, Val Acc: 0.5700, Test Acc: 0.5660\n",
            "Epoch: 012, Train: 0.8333, Val Acc: 0.6600, Test Acc: 0.6630\n",
            "Epoch: 013, Train: 0.9333, Val Acc: 0.7060, Test Acc: 0.7210\n",
            "Epoch: 014, Train: 0.9500, Val Acc: 0.7200, Test Acc: 0.7250\n",
            "Epoch: 015, Train: 0.9167, Val Acc: 0.6740, Test Acc: 0.6770\n",
            "Epoch: 016, Train: 0.9000, Val Acc: 0.6580, Test Acc: 0.6530\n",
            "Epoch: 017, Train: 0.9500, Val Acc: 0.7160, Test Acc: 0.7130\n",
            "Epoch: 018, Train: 0.9667, Val Acc: 0.7600, Test Acc: 0.7680\n",
            "Epoch: 019, Train: 0.9667, Val Acc: 0.7680, Test Acc: 0.7720\n",
            "Epoch: 020, Train: 0.9667, Val Acc: 0.7720, Test Acc: 0.7640\n",
            "Epoch: 021, Train: 0.9667, Val Acc: 0.7640, Test Acc: 0.7620\n",
            "Epoch: 022, Train: 0.9833, Val Acc: 0.7840, Test Acc: 0.7710\n",
            "Epoch: 023, Train: 0.9833, Val Acc: 0.7840, Test Acc: 0.7740\n",
            "Epoch: 024, Train: 0.9833, Val Acc: 0.7800, Test Acc: 0.7800\n",
            "Epoch: 025, Train: 0.9667, Val Acc: 0.7760, Test Acc: 0.7740\n",
            "Epoch: 026, Train: 0.9833, Val Acc: 0.7760, Test Acc: 0.7770\n",
            "Epoch: 027, Train: 0.9833, Val Acc: 0.7780, Test Acc: 0.7710\n",
            "Epoch: 028, Train: 0.9833, Val Acc: 0.7700, Test Acc: 0.7770\n",
            "Epoch: 029, Train: 0.9833, Val Acc: 0.7940, Test Acc: 0.7820\n",
            "Epoch: 030, Train: 0.9833, Val Acc: 0.7900, Test Acc: 0.7790\n",
            "Epoch: 031, Train: 0.9833, Val Acc: 0.7940, Test Acc: 0.7860\n",
            "Epoch: 032, Train: 0.9833, Val Acc: 0.8020, Test Acc: 0.7930\n",
            "Epoch: 033, Train: 0.9833, Val Acc: 0.7980, Test Acc: 0.7900\n",
            "Epoch: 034, Train: 0.9833, Val Acc: 0.7980, Test Acc: 0.7900\n",
            "Epoch: 035, Train: 0.9833, Val Acc: 0.7900, Test Acc: 0.7960\n",
            "Epoch: 036, Train: 0.9833, Val Acc: 0.7940, Test Acc: 0.7920\n",
            "Epoch: 037, Train: 0.9833, Val Acc: 0.8000, Test Acc: 0.7900\n",
            "Epoch: 038, Train: 0.9833, Val Acc: 0.8060, Test Acc: 0.7880\n",
            "Epoch: 039, Train: 0.9833, Val Acc: 0.8060, Test Acc: 0.7870\n",
            "Epoch: 040, Train: 0.9833, Val Acc: 0.8020, Test Acc: 0.7900\n",
            "Epoch: 041, Train: 0.9833, Val Acc: 0.7920, Test Acc: 0.7890\n",
            "Epoch: 042, Train: 0.9833, Val Acc: 0.7920, Test Acc: 0.7880\n",
            "Epoch: 043, Train: 0.9833, Val Acc: 0.7940, Test Acc: 0.7850\n",
            "Epoch: 044, Train: 0.9833, Val Acc: 0.8000, Test Acc: 0.7930\n",
            "Epoch: 045, Train: 0.9833, Val Acc: 0.7920, Test Acc: 0.7880\n",
            "Epoch: 046, Train: 0.9833, Val Acc: 0.7980, Test Acc: 0.7900\n",
            "Epoch: 047, Train: 0.9833, Val Acc: 0.7940, Test Acc: 0.7930\n",
            "Epoch: 048, Train: 0.9833, Val Acc: 0.7900, Test Acc: 0.7910\n",
            "Epoch: 049, Train: 0.9833, Val Acc: 0.7960, Test Acc: 0.7900\n",
            "Epoch: 050, Train: 0.9833, Val Acc: 0.7940, Test Acc: 0.7940\n"
          ]
        }
      ],
      "source": [
        "from IPython.display import Javascript\n",
        "display(Javascript('''google.colab.output.setIframeHeight(0, true, {maxHeight: 300})'''))\n",
        "\n",
        "model = GCN(hidden_channels=16)\n",
        "optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)\n",
        "criterion = torch.nn.CrossEntropyLoss()\n",
        "\n",
        "def train():\n",
        "    model.train()\n",
        "\n",
        "    total_loss = total_nodes = 0\n",
        "    for sub_data in train_loader:  # Iterate over each mini-batch.\n",
        "        optimizer.zero_grad()  # Clear gradients.\n",
        "        out = model(sub_data.x, sub_data.edge_index)  # Perform a single forward pass.\n",
        "        loss = criterion(out[sub_data.train_mask], sub_data.y[sub_data.train_mask])  # Compute the loss solely based on the training nodes.\n",
        "        loss.backward()  # Derive gradients.\n",
        "        optimizer.step()  # Update parameters based on gradients.\n",
        "        total_loss += float(loss) * int(sub_data.train_mask.sum())\n",
        "        total_nodes += int(sub_data.train_mask.sum())\n",
        "    return total_loss / total_nodes  # Average loss over all training nodes.\n",
        "\n",
        "@torch.no_grad()  # Inference only: no gradients needed.\n",
        "def test():\n",
        "    model.eval()\n",
        "    out = model(data.x, data.edge_index)  # Full-batch inference on the original graph.\n",
        "    pred = out.argmax(dim=1)  # Use the class with highest probability.\n",
        "\n",
        "    accs = []\n",
        "    for mask in [data.train_mask, data.val_mask, data.test_mask]:\n",
        "        correct = pred[mask] == data.y[mask]  # Check against ground-truth labels.\n",
        "        accs.append(int(correct.sum()) / int(mask.sum()))  # Derive ratio of correct predictions.\n",
        "    return accs\n",
        "\n",
        "for epoch in range(1, 51):\n",
        "    loss = train()\n",
        "    train_acc, val_acc, test_acc = test()\n",
        "    print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, Val Acc: {val_acc:.4f}, Test Acc: {test_acc:.4f}')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SDOmdUe0C3U1"
      },
      "source": [
        "## Conclusion\n",
        "\n",
        "In this chapter, you have been introduced to one approach for scaling up GNNs to large graphs that would otherwise not fit into GPU memory.\n",
        "\n",
        "This also concludes the hands-on tutorial on **deep graph learning** with PyTorch Geometric.\n",
        "If you want to learn more about GNNs or PyTorch Geometric, feel free to check out **[PyG's documentation](https://pytorch-geometric.readthedocs.io/en/latest/?badge=latest)**, the **[list of implemented methods](https://github.com/rusty1s/pytorch_geometric)**, and the **[provided examples](https://github.com/rusty1s/pytorch_geometric/tree/master/examples)**, which cover additional topics such as **link prediction**, **graph attention**, **mesh or point cloud convolutions**, and **further approaches for scaling up GNNs**."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.16"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
