{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Basic Usage of the MessagePassing Class"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Message Passing Neural Networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Formula:\n",
    "$$\n",
    "\\mathbf{x}_i^{(k)} = \\gamma^{(k)}(\\mathbf{x}_i^{(k-1)},\\square_{j\\in\\mathcal{N}(i)}\\phi^{(k)}(\\mathbf{x}_i^{(k-1)},\\mathbf{x}_j^{(k-1)},e_{j,i})),\n",
    "$$\n",
    "\n",
    "where $\\mathbf{x}_i^{(k-1)}\\in \\mathbb{R}^F$ denotes the features of node $i$, $e_{j,i}$ denotes the features of the edge from node $j$ to node $i$, $\\square$ denotes a differentiable, permutation-invariant function such as sum, mean, or max, and $\\gamma$ and $\\phi$ denote differentiable functions such as MLPs."
   ]
  },
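  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete instantiation of the formula (a hypothetical sketch in plain PyTorch, not part of PyG): take $\\square=\\mathrm{sum}$, let $\\phi$ be a single linear layer applied to $\\mathbf{x}_j$ only (ignoring $\\mathbf{x}_i$ and the edge features $e_{j,i}$ for brevity), and let $\\gamma$ be the identity:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "x = torch.randn(3, 4)                        # 3 nodes, 4 features each\n",
    "edge_index = torch.tensor([[0, 1],           # source nodes j\n",
    "                           [1, 2]])          # target nodes i\n",
    "phi = torch.nn.Linear(4, 4)                  # phi: a single linear layer\n",
    "\n",
    "# message: phi(x_j) for each edge (j, i)\n",
    "msg = phi(x[edge_index[0]])\n",
    "# aggregate (square = sum): sum the messages arriving at each target node i\n",
    "out = torch.zeros(3, 4).index_add_(0, edge_index[1], msg)\n",
    "# update (gamma): identity here\n",
    "print(out)"
   ]
  },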
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The MessagePassing Class"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To use it, you only need to define the function $\\phi$, e.g. <kbd>message()</kbd>, the function $\\gamma$, e.g. <kbd>update()</kbd>, and the aggregation scheme, e.g. <kbd>aggr=\"add\"</kbd>, <kbd>aggr=\"mean\"</kbd>, or <kbd>aggr=\"max\"</kbd>."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### MessagePassing(aggr=\"add\", flow=\"source_to_target\", node_dim=-2)\n",
    "aggr defines the aggregation scheme (\"add\", \"mean\", or \"max\").  \n",
    "flow defines the direction of message passing (\"source_to_target\" or \"target_to_source\").  \n",
    "node_dim defines along which axis to propagate.\n",
    "+ ### MessagePassing.propagate(edge_index, size=None, **kwargs)  \n",
    "The initial call to start propagating messages. It takes the edge indices and all additional data needed to construct messages and to update node embeddings.\n",
    "+ ### MessagePassing.message(...)\n",
    "Constructs the messages to propagate, i.e. the $\\phi$ part of the formula. If flow=\"source_to_target\", node $i$ receives messages along the edges $(j,i)\\in \\mathcal{E}$; if flow=\"target_to_source\", node $i$ receives messages along the edges $(i,j)\\in \\mathcal{E}$.\n",
    "+ ### MessagePassing.update(aggr_out,...)\n",
    "Updates the node embeddings, i.e. the $\\gamma$ part of the formula."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Workflow\n",
    "First call <kbd>propagate()</kbd>, which internally calls the <kbd>message()</kbd>, <kbd>aggregate()</kbd>, and <kbd>update()</kbd> functions. \n",
    "\n",
    "#### Taking GCN as an example  \n",
    "Pass the node features, the edge indices, and any additional data into <kbd>propagate()</kbd>, e.g. GCN's normalization coefficients $norm$. <kbd>message()</kbd> and <kbd>update()</kbd> can also receive this additional data, as long as the same variable names are used when defining those functions; this is implemented by passing a dictionary of variables via \\*\\*kwargs.\n",
    "\n",
    "The <kbd>message()</kbd> function constructs the message propagated along each edge, e.g. the features of the source node. In GCN, $x_j$ needs to be normalized by $norm$. $x_j$ denotes the features of the node whose message is propagated along each edge, and $x_i$ denotes the features of the node the message is propagated to. In \"source_to_target\" mode, $x_j$ holds the \"source\" node features, i.e. the features of each edge's start node, and $x_i$ holds the \"target\" node features, i.e. the features of each edge's end node; \"target_to_source\" is the opposite, but $x_j$ always corresponds to the features being propagated. In fact, any node-level tensor can be lifted to edge level this way, as long as the function signature declares arguments with the matching _i and _j suffixes.  \n",
    "\n",
    "<kbd>aggregate()</kbd> aggregates all messages propagated to each node; the <kbd>aggr</kbd> attribute of <kbd>MessagePassing</kbd> determines which aggregation <kbd>aggregate()</kbd> applies.\n",
    "\n",
    "<kbd>update()</kbd> receives the output <kbd>aggr_out</kbd> of <kbd>aggregate()</kbd> and updates the node embeddings."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A Simple GCN Layer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-06-10T15:19:42.156209Z",
     "start_time": "2021-06-10T15:19:42.141488Z"
    }
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch_geometric.nn import MessagePassing\n",
    "from torch_geometric.utils import add_self_loops, degree\n",
    "\n",
    "class GCNConv(MessagePassing):\n",
    "    def __init__(self, in_channels, out_channels):\n",
    "        super(GCNConv, self).__init__(aggr='add',flow=\"source_to_target\")  # \"Add\" aggregation (Step 5).\n",
    "        self.lin = torch.nn.Linear(in_channels, out_channels)\n",
    "\n",
    "    def forward(self, x, edge_index):\n",
    "        # x has shape [N, in_channels]\n",
    "        # edge_index has shape [2, E]\n",
    "\n",
    "        # Step 1: Add self-loops to the adjacency matrix.\n",
    "        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))\n",
    "\n",
    "        # Step 2: Linearly transform node feature matrix.\n",
    "        # (Intentionally left commented out here so the printed messages show the raw node features.)\n",
    "#         x = self.lin(x)\n",
    "\n",
    "        # Step 3: Compute normalization.\n",
    "        row, col = edge_index\n",
    "        deg = degree(col, x.size(0), dtype=x.dtype)\n",
    "        deg_inv_sqrt = deg.pow(-0.5)\n",
    "        norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]\n",
    "\n",
    "        # Step 4-5: Start propagating messages.\n",
    "        a = self.propagate(edge_index, x=x, norm=norm)\n",
    "        print(\"propagate:\",a)\n",
    "        return a\n",
    "\n",
    "    def message(self, x_i, x_j, norm):\n",
    "        # x_j has shape [E, in_channels] here, since self.lin is commented out above\n",
    "\n",
    "        # Step 4: Normalize node features.\n",
    "        print(\"norm-1,1\",norm.view(-1, 1))\n",
    "        print(\"x_j\",x_j)\n",
    "        print(\"x_i\",x_i)\n",
    "        return norm.view(-1, 1) * x_j"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-06-10T15:19:42.451316Z",
     "start_time": "2021-06-10T15:19:42.445272Z"
    }
   },
   "outputs": [],
   "source": [
    "n = torch.FloatTensor([[1,1,1],[2,2,2],[3,3,3]])\n",
    "e = torch.LongTensor([[0,1],[1,2]]).t()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-06-10T15:19:42.820374Z",
     "start_time": "2021-06-10T15:19:42.815643Z"
    }
   },
   "outputs": [],
   "source": [
    "from torch_geometric.data import Data\n",
    "data = Data(x=n, edge_index=e.contiguous())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-06-10T15:19:43.199108Z",
     "start_time": "2021-06-10T15:19:43.193988Z"
    }
   },
   "outputs": [],
   "source": [
    "conv = GCNConv(3,3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-06-10T15:19:43.629672Z",
     "start_time": "2021-06-10T15:19:43.617101Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "norm-1,1 tensor([[0.7071],\n",
      "        [0.5000],\n",
      "        [1.0000],\n",
      "        [0.5000],\n",
      "        [0.5000]])\n",
      "x_j tensor([[2., 2., 2.],\n",
      "        [3., 3., 3.],\n",
      "        [1., 1., 1.],\n",
      "        [2., 2., 2.],\n",
      "        [3., 3., 3.]])\n",
      "x_i tensor([[1., 1., 1.],\n",
      "        [2., 2., 2.],\n",
      "        [1., 1., 1.],\n",
      "        [2., 2., 2.],\n",
      "        [3., 3., 3.]])\n",
      "propagate: tensor([[2.4142, 2.4142, 2.4142],\n",
      "        [2.5000, 2.5000, 2.5000],\n",
      "        [1.5000, 1.5000, 1.5000]])\n",
      "result: tensor([[2.4142, 2.4142, 2.4142],\n",
      "        [2.5000, 2.5000, 2.5000],\n",
      "        [1.5000, 1.5000, 1.5000]])\n"
     ]
    }
   ],
   "source": [
    "print(\"result:\",conv(data.x,data.edge_index))"
   ]
  },
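  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (a hand-rolled sketch in plain PyTorch, not using MessagePassing), the propagate output above can be reproduced by multiplying each edge's $norm$ with the node features that play the role of $x_j$ and scatter-summing the messages. Which endpoint plays the role of $x_j$ is read off directly from the tensors printed above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "x = torch.FloatTensor([[1, 1, 1], [2, 2, 2], [3, 3, 3]])\n",
    "# Edges 0->1 and 1->2 plus the self-loops added by add_self_loops.\n",
    "row = torch.tensor([0, 1, 0, 1, 2])\n",
    "col = torch.tensor([1, 2, 0, 1, 2])\n",
    "\n",
    "# Degree accumulated over col, as in Step 3 of the layer above.\n",
    "deg = torch.zeros(3).scatter_add_(0, col, torch.ones(5))\n",
    "deg_inv_sqrt = deg.pow(-0.5)\n",
    "norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]\n",
    "\n",
    "# In the run above, the printed x_j equals x[col]; messages are summed at row.\n",
    "msg = norm.view(-1, 1) * x[col]\n",
    "out = torch.zeros_like(x).index_add_(0, row, msg)\n",
    "print(out)  # matches the propagate output above"
   ]
  },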
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## GCN Source Code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Optional, Tuple\n",
    "from torch_geometric.typing import Adj, OptTensor, PairTensor\n",
    "\n",
    "import torch\n",
    "from torch import Tensor\n",
    "from torch.nn import Parameter\n",
    "from torch_scatter import scatter_add\n",
    "from torch_sparse import SparseTensor, matmul, fill_diag, sum, mul\n",
    "from torch_geometric.nn.conv import MessagePassing\n",
    "from torch_geometric.utils import add_remaining_self_loops\n",
    "from torch_geometric.utils.num_nodes import maybe_num_nodes\n",
    "\n",
    "from ..inits import glorot, zeros\n",
    "\n",
    "\n",
    "@torch.jit._overload\n",
    "def gcn_norm(edge_index, edge_weight=None, num_nodes=None, improved=False,\n",
    "             add_self_loops=True, dtype=None):\n",
    "    # type: (Tensor, OptTensor, Optional[int], bool, bool, Optional[int]) -> PairTensor  # noqa\n",
    "    pass\n",
    "\n",
    "\n",
    "@torch.jit._overload\n",
    "def gcn_norm(edge_index, edge_weight=None, num_nodes=None, improved=False,\n",
    "             add_self_loops=True, dtype=None):\n",
    "    # type: (SparseTensor, OptTensor, Optional[int], bool, bool, Optional[int]) -> SparseTensor  # noqa\n",
    "    pass\n",
    "\n",
    "\n",
    "# Normalize the adjacency matrix\n",
    "def gcn_norm(edge_index, edge_weight=None, num_nodes=None, improved=False,\n",
    "             add_self_loops=True, dtype=None):\n",
    "    \n",
    "    # Value used to fill the diagonal\n",
    "    fill_value = 2. if improved else 1.\n",
    "\n",
    "    # If edge_index is a sparse SparseTensor object\n",
    "    if isinstance(edge_index, SparseTensor):\n",
    "        adj_t = edge_index\n",
    "        if not adj_t.has_value():\n",
    "            adj_t = adj_t.fill_value(1., dtype=dtype)\n",
    "        if add_self_loops:\n",
    "            adj_t = fill_diag(adj_t, fill_value)\n",
    "        deg = sum(adj_t, dim=1)\n",
    "        deg_inv_sqrt = deg.pow_(-0.5)\n",
    "        deg_inv_sqrt.masked_fill_(deg_inv_sqrt == float('inf'), 0.)\n",
    "        adj_t = mul(adj_t, deg_inv_sqrt.view(-1, 1))\n",
    "        adj_t = mul(adj_t, deg_inv_sqrt.view(1, -1))\n",
    "        return adj_t\n",
    "\n",
    "    # Not a SparseTensor object\n",
    "    else:\n",
    "        # Get the number of nodes\n",
    "        num_nodes = maybe_num_nodes(edge_index, num_nodes)\n",
    "\n",
    "        # If there is no edge_weight, set all weights to 1\n",
    "        if edge_weight is None:\n",
    "            edge_weight = torch.ones((edge_index.size(1), ), dtype=dtype,\n",
    "                                     device=edge_index.device)\n",
    "\n",
    "        # Add self-loops; if improved is True the self-loop weight is 2, otherwise 1\n",
    "        if add_self_loops:\n",
    "            edge_index, tmp_edge_weight = add_remaining_self_loops(\n",
    "                edge_index, edge_weight, fill_value, num_nodes)\n",
    "            assert tmp_edge_weight is not None\n",
    "            edge_weight = tmp_edge_weight\n",
    "\n",
    "        # Compute the symmetrically normalized edge weights D^{-1/2} A D^{-1/2}\n",
    "        row, col = edge_index[0], edge_index[1]\n",
    "        deg = scatter_add(edge_weight, col, dim=0, dim_size=num_nodes)\n",
    "        deg_inv_sqrt = deg.pow_(-0.5)\n",
    "        deg_inv_sqrt.masked_fill_(deg_inv_sqrt == float('inf'), 0)\n",
    "        return edge_index, deg_inv_sqrt[row] * edge_weight * deg_inv_sqrt[col]\n",
    "\n",
    "\n",
    "class GCNConv(MessagePassing):\n",
    "    r\"\"\"The graph convolutional operator from the `\"Semi-supervised\n",
    "    Classification with Graph Convolutional Networks\"\n",
    "    <https://arxiv.org/abs/1609.02907>`_ paper\n",
    "\n",
    "    .. math::\n",
    "        \\mathbf{X}^{\\prime} = \\mathbf{\\hat{D}}^{-1/2} \\mathbf{\\hat{A}}\n",
    "        \\mathbf{\\hat{D}}^{-1/2} \\mathbf{X} \\mathbf{\\Theta},\n",
    "\n",
    "    where :math:`\\mathbf{\\hat{A}} = \\mathbf{A} + \\mathbf{I}` denotes the\n",
    "    adjacency matrix with inserted self-loops and\n",
    "    :math:`\\hat{D}_{ii} = \\sum_{j=0} \\hat{A}_{ij}` its diagonal degree matrix.\n",
    "    The adjacency matrix can include other values than :obj:`1` representing\n",
    "    edge weights via the optional :obj:`edge_weight` tensor.\n",
    "\n",
    "    Its node-wise formulation is given by:\n",
    "\n",
    "    .. math::\n",
    "        \\mathbf{x}^{\\prime}_i = \\mathbf{\\Theta} \\sum_{j \\in \\mathcal{N}(i) \\cup\n",
    "        \\{ i \\}} \\frac{e_{j,i}}{\\sqrt{\\hat{d}_j \\hat{d}_i}} \\mathbf{x}_j\n",
    "\n",
    "    with :math:`\\hat{d}_i = 1 + \\sum_{j \\in \\mathcal{N}(i)} e_{j,i}`, where\n",
    "    :math:`e_{j,i}` denotes the edge weight from source node :obj:`j` to target\n",
    "    node :obj:`i` (default: :obj:`1.0`)\n",
    "\n",
    "    Args:\n",
    "        in_channels (int): Size of each input sample.\n",
    "        out_channels (int): Size of each output sample.\n",
    "        improved (bool, optional): If set to :obj:`True`, the layer computes\n",
    "            :math:`\\mathbf{\\hat{A}}` as :math:`\\mathbf{A} + 2\\mathbf{I}`.\n",
    "            (default: :obj:`False`)\n",
    "        cached (bool, optional): If set to :obj:`True`, the layer will cache\n",
    "            the computation of :math:`\\mathbf{\\hat{D}}^{-1/2} \\mathbf{\\hat{A}}\n",
    "            \\mathbf{\\hat{D}}^{-1/2}` on first execution, and will use the\n",
    "            cached version for further executions.\n",
    "            This parameter should only be set to :obj:`True` in transductive\n",
    "            learning scenarios. (default: :obj:`False`)\n",
    "        add_self_loops (bool, optional): If set to :obj:`False`, will not add\n",
    "            self-loops to the input graph. (default: :obj:`True`)\n",
    "        normalize (bool, optional): Whether to add self-loops and compute\n",
    "            symmetric normalization coefficients on the fly.\n",
    "            (default: :obj:`True`)\n",
    "        bias (bool, optional): If set to :obj:`False`, the layer will not learn\n",
    "            an additive bias. (default: :obj:`True`)\n",
    "        **kwargs (optional): Additional arguments of\n",
    "            :class:`torch_geometric.nn.conv.MessagePassing`.\n",
    "    \"\"\"\n",
    "\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "        in_channels (int): input dimensionality\n",
    "        out_channels (int): output dimensionality\n",
    "        improved (bool, optional): if True, the adjacency matrix is computed as A+2I; defaults to False\n",
    "        cached (bool, optional):\n",
    "            if True, the DAD computation is cached on the first run and reused for later calls, saving runtime; it should only be set to True in transductive scenarios; defaults to False.\n",
    "        add_self_loops (bool, optional):\n",
    "            whether to add self-loops during the computation; defaults to True. Since add_remaining_self_loops is used, only missing self-loops are added instead of re-adding all of them.\n",
    "        normalize (bool, optional):\n",
    "            whether to add self-loops and compute the symmetric normalization coefficients on the fly; defaults to True\n",
    "        bias (bool, optional):\n",
    "            if False, no additive bias is learned; defaults to True\n",
    "    \"\"\"\n",
    "    \n",
    "\n",
    "    def __init__(self, in_channels: int, out_channels: int,\n",
    "                 improved: bool = False, cached: bool = False,\n",
    "                 add_self_loops: bool = True, normalize: bool = True,\n",
    "                 bias: bool = True, **kwargs):\n",
    "\n",
    "        kwargs.setdefault('aggr', 'add')\n",
    "        super(GCNConv, self).__init__(**kwargs)\n",
    "\n",
    "        self.in_channels = in_channels\n",
    "        self.out_channels = out_channels\n",
    "        self.improved = improved\n",
    "        self.cached = cached\n",
    "        self.add_self_loops = add_self_loops\n",
    "        self.normalize = normalize\n",
    "\n",
    "        self._cached_edge_index = None\n",
    "        self._cached_adj_t = None\n",
    "\n",
    "        self.weight = Parameter(torch.Tensor(in_channels, out_channels))\n",
    "\n",
    "        if bias:\n",
    "            self.bias = Parameter(torch.Tensor(out_channels))\n",
    "        else:\n",
    "            self.register_parameter('bias', None)\n",
    "\n",
    "        self.reset_parameters()\n",
    "\n",
    "    def reset_parameters(self):\n",
    "        glorot(self.weight)\n",
    "        zeros(self.bias)\n",
    "        self._cached_edge_index = None\n",
    "        self._cached_adj_t = None\n",
    "\n",
    "\n",
    "    def forward(self, x: Tensor, edge_index: Adj,\n",
    "                edge_weight: OptTensor = None) -> Tensor:\n",
    "\n",
    "        if self.normalize:\n",
    "            if isinstance(edge_index, Tensor):\n",
    "                cache = self._cached_edge_index\n",
    "                if cache is None:\n",
    "                    edge_index, edge_weight = gcn_norm(  # yapf: disable\n",
    "                        edge_index, edge_weight, x.size(self.node_dim),\n",
    "                        self.improved, self.add_self_loops)\n",
    "                    if self.cached:\n",
    "                        self._cached_edge_index = (edge_index, edge_weight)\n",
    "                else:\n",
    "                    edge_index, edge_weight = cache[0], cache[1]\n",
    "\n",
    "            elif isinstance(edge_index, SparseTensor):\n",
    "                cache = self._cached_adj_t\n",
    "                if cache is None:\n",
    "                    edge_index = gcn_norm(  # yapf: disable\n",
    "                        edge_index, edge_weight, x.size(self.node_dim),\n",
    "                        self.improved, self.add_self_loops)\n",
    "                    if self.cached:\n",
    "                        self._cached_adj_t = edge_index\n",
    "                else:\n",
    "                    edge_index = cache\n",
    "\n",
    "        x = x @ self.weight\n",
    "\n",
    "        # propagate_type: (x: Tensor, edge_weight: OptTensor)\n",
    "        out = self.propagate(edge_index, x=x, edge_weight=edge_weight,\n",
    "                             size=None)\n",
    "\n",
    "        if self.bias is not None:\n",
    "            out += self.bias\n",
    "\n",
    "        return out\n",
    "\n",
    "\n",
    "    def message(self, x_j: Tensor, edge_weight: OptTensor) -> Tensor:\n",
    "        return x_j if edge_weight is None else edge_weight.view(-1, 1) * x_j\n",
    "\n",
    "    def message_and_aggregate(self, adj_t: SparseTensor, x: Tensor) -> Tensor:\n",
    "        return matmul(adj_t, x, reduce=self.aggr)\n",
    "\n",
    "    def __repr__(self):\n",
    "        return '{}({}, {})'.format(self.__class__.__name__, self.in_channels,\n",
    "                                   self.out_channels)"
   ]
  }
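  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see concretely what gcn_norm returns in the dense edge_index branch, here is a hand-rolled re-computation (an illustrative sketch that mirrors the scatter_add logic above, without torch_scatter) on the 3-node chain graph 0→1→2 used earlier:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Edges 0->1 and 1->2 with self-loops already added, all edge weights 1.\n",
    "row = torch.tensor([0, 1, 0, 1, 2])\n",
    "col = torch.tensor([1, 2, 0, 1, 2])\n",
    "edge_weight = torch.ones(5)\n",
    "\n",
    "# deg accumulates edge weights at col, mirroring scatter_add above.\n",
    "deg = torch.zeros(3).scatter_add_(0, col, edge_weight)\n",
    "deg_inv_sqrt = deg.pow(-0.5)\n",
    "deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0.\n",
    "\n",
    "norm = deg_inv_sqrt[row] * edge_weight * deg_inv_sqrt[col]\n",
    "print(norm)  # the same normalization coefficients as in the simple layer above"
   ]
  }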
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "song",
   "language": "python",
   "name": "song"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.13"
  },
  "latex_envs": {
   "LaTeX_envs_menu_present": true,
   "autoclose": false,
   "autocomplete": true,
   "bibliofile": "biblio.bib",
   "cite_by": "apalike",
   "current_citInitial": 1,
   "eqLabelWithNumbers": true,
   "eqNumInitial": 1,
   "hotkeys": {
    "equation": "Ctrl-E",
    "itemize": "Ctrl-I"
   },
   "labels_anchors": false,
   "latex_user_defs": false,
   "report_style_numbering": false,
   "user_envs_cfg": false
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
