{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# torch2onnx：`Scatter`\n",
    "\n",
    "参考：[Scatter](https://onnx.ai/onnx/operators/onnx__Scatter.html#scatter)\n",
    "\n",
    "```{warning}\n",
    "从版本 11 开始，`Scatter` 算子已被弃用，请使用 `ScatterElements`，它提供了相同的功能。\n",
    "```\n",
    "\n",
    "`Scatter` 接受三个输入 `data`、`updates` 和 `indices`，它们的秩 `r>=1`，以及可选的属性轴，用于标识 `data` 的轴（默认为最外层轴，即轴 `0`）。算子的输出是通过创建输入 `data` 的副本，然后根据由 `indices` 指定的特定索引位置更新其值来生成的。它的输出形状与数据的形状相同。\n",
    "\n",
    "对于 `updates` 中的每个条目，通过将 `indices` 中相应的条目与该条目本身的索引组合来获取 `data` 中的目标索引：维度 `=axis` 的索引值从 `indices` 中相应条目的值获得，维度 `！=axis` 的索引值从该条目本身的索引获得。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在二维张量的情况下，对应于 `[i][j]` 条目的更新如下进行：\n",
    "\n",
    "```python\n",
    "output[indices[i][j]][j] = updates[i][j] if axis = 0,\n",
    "output[i][indices[i][j]] = updates[i][j] if axis = 1,\n",
    "```"
   ]
  },
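  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The axis-0 rule above can be checked directly against `torch.Tensor.scatter_`, which implements the same semantics (a small sketch; the tensors below are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "data = torch.zeros(3, 3)\n",
    "indices = torch.tensor([[1, 0, 2]])\n",
    "updates = torch.tensor([[1.0, 1.1, 1.2]])\n",
    "\n",
    "# Apply the axis=0 rule by hand: output[indices[i][j]][j] = updates[i][j]\n",
    "manual = data.clone()\n",
    "for i in range(indices.size(0)):\n",
    "    for j in range(indices.size(1)):\n",
    "        manual[indices[i][j]][j] = updates[i][j]\n",
    "\n",
    "# scatter_ along dim=0 gives the same result\n",
    "assert torch.equal(manual, data.clone().scatter_(0, indices, updates))\n",
    "manual"
   ]
  },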
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "该算子是 `GatherElements` 的逆运输。它类似于 Torch 的 Scatter 算子。\n",
    "\n",
    "示例1：\n",
    "\n",
    "```python\n",
    "data = [\n",
    "    [0.0, 0.0, 0.0],\n",
    "    [0.0, 0.0, 0.0],\n",
    "    [0.0, 0.0, 0.0],\n",
    "]\n",
    "indices = [\n",
    "    [1, 0, 2],\n",
    "    [0, 2, 1],\n",
    "]\n",
    "updates = [\n",
    "    [1.0, 1.1, 1.2],\n",
    "    [2.0, 2.1, 2.2],\n",
    "]\n",
    "output = [\n",
    "    [2.0, 1.1, 0.0]\n",
    "    [1.0, 0.0, 2.2]\n",
    "    [0.0, 2.1, 1.2]\n",
    "]\n",
    "```\n",
    "\n",
    "示例2：\n",
    "\n",
    "```python\n",
    "data = [[1.0, 2.0, 3.0, 4.0, 5.0]]\n",
    "indices = [[1, 3]]\n",
    "updates = [[1.1, 2.1]]\n",
    "axis = 1\n",
    "output = [[1.0, 1.1, 3.0, 2.1, 5.0]]\n",
    "```"
   ]
  },
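  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both examples above can be reproduced with `torch.Tensor.scatter_`, applied to a copy of the data (ONNX `Scatter` likewise copies `data` before updating):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Example 1: axis = 0\n",
    "out1 = torch.zeros(3, 3).scatter_(\n",
    "    0,\n",
    "    torch.tensor([[1, 0, 2], [0, 2, 1]]),\n",
    "    torch.tensor([[1.0, 1.1, 1.2], [2.0, 2.1, 2.2]]),\n",
    ")\n",
    "print(out1)\n",
    "\n",
    "# Example 2: axis = 1\n",
    "out2 = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0]]).scatter_(\n",
    "    1,\n",
    "    torch.tensor([[1, 3]]),\n",
    "    torch.tensor([[1.1, 2.1]]),\n",
    ")\n",
    "print(out2)"
   ]
  },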
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 可选属性 `axis`：在哪个轴上进行扩散。负值意味着从后面开始计算维度。可接受的范围是 $[-r, r-1]$，其中 `r = rank(data)`。"
   ]
  },
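  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a rank-2 tensor, `axis = -1` is therefore equivalent to `axis = 1`. PyTorch's `scatter_` canonicalizes a negative `dim` the same way (a quick sketch with made-up tensors):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "x = torch.zeros(2, 4)\n",
    "idx = torch.tensor([[2], [3]])\n",
    "src = torch.tensor([[1.0], [2.0]])\n",
    "\n",
    "# For rank r = 2, dim = -1 is canonicalized to dim = -1 + r = 1\n",
    "neg = x.clone().scatter_(-1, idx, src)\n",
    "pos = x.clone().scatter_(1, idx, src)\n",
    "assert torch.equal(neg, pos)\n",
    "neg"
   ]
  },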
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;31mDocstring:\u001b[0m\n",
      "scatter_(dim, index, src, reduce=None) -> Tensor\n",
      "\n",
      "Writes all values from the tensor :attr:`src` into :attr:`self` at the indices\n",
      "specified in the :attr:`index` tensor. For each value in :attr:`src`, its output\n",
      "index is specified by its index in :attr:`src` for ``dimension != dim`` and by\n",
      "the corresponding value in :attr:`index` for ``dimension = dim``.\n",
      "\n",
      "For a 3-D tensor, :attr:`self` is updated as::\n",
      "\n",
      "    self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0\n",
      "    self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1\n",
      "    self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2\n",
      "\n",
      "This is the reverse operation of the manner described in :meth:`~Tensor.gather`.\n",
      "\n",
      ":attr:`self`, :attr:`index` and :attr:`src` (if it is a Tensor) should all have\n",
      "the same number of dimensions. It is also required that\n",
      "``index.size(d) <= src.size(d)`` for all dimensions ``d``, and that\n",
      "``index.size(d) <= self.size(d)`` for all dimensions ``d != dim``.\n",
      "Note that ``index`` and ``src`` do not broadcast.\n",
      "\n",
      "Moreover, as for :meth:`~Tensor.gather`, the values of :attr:`index` must be\n",
      "between ``0`` and ``self.size(dim) - 1`` inclusive.\n",
      "\n",
      ".. warning::\n",
      "\n",
      "    When indices are not unique, the behavior is non-deterministic (one of the\n",
      "    values from ``src`` will be picked arbitrarily) and the gradient will be\n",
      "    incorrect (it will be propagated to all locations in the source that\n",
      "    correspond to the same index)!\n",
      "\n",
      ".. note::\n",
      "\n",
      "    The backward pass is implemented only for ``src.shape == index.shape``.\n",
      "\n",
      "Additionally accepts an optional :attr:`reduce` argument that allows\n",
      "specification of an optional reduction operation, which is applied to all\n",
      "values in the tensor :attr:`src` into :attr:`self` at the indices\n",
      "specified in the :attr:`index`. For each value in :attr:`src`, the reduction\n",
      "operation is applied to an index in :attr:`self` which is specified by\n",
      "its index in :attr:`src` for ``dimension != dim`` and by the corresponding\n",
      "value in :attr:`index` for ``dimension = dim``.\n",
      "\n",
      "Given a 3-D tensor and reduction using the multiplication operation, :attr:`self`\n",
      "is updated as::\n",
      "\n",
      "    self[index[i][j][k]][j][k] *= src[i][j][k]  # if dim == 0\n",
      "    self[i][index[i][j][k]][k] *= src[i][j][k]  # if dim == 1\n",
      "    self[i][j][index[i][j][k]] *= src[i][j][k]  # if dim == 2\n",
      "\n",
      "Reducing with the addition operation is the same as using\n",
      ":meth:`~torch.Tensor.scatter_add_`.\n",
      "\n",
      ".. warning::\n",
      "    The reduce argument with Tensor ``src`` is deprecated and will be removed in\n",
      "    a future PyTorch release. Please use :meth:`~torch.Tensor.scatter_reduce_`\n",
      "    instead for more reduction options.\n",
      "\n",
      "Args:\n",
      "    dim (int): the axis along which to index\n",
      "    index (LongTensor): the indices of elements to scatter, can be either empty\n",
      "        or of the same dimensionality as ``src``. When empty, the operation\n",
      "        returns ``self`` unchanged.\n",
      "    src (Tensor or float): the source element(s) to scatter.\n",
      "    reduce (str, optional): reduction operation to apply, can be either\n",
      "        ``'add'`` or ``'multiply'``.\n",
      "\n",
      "Example::\n",
      "\n",
      "    >>> src = torch.arange(1, 11).reshape((2, 5))\n",
      "    >>> src\n",
      "    tensor([[ 1,  2,  3,  4,  5],\n",
      "            [ 6,  7,  8,  9, 10]])\n",
      "    >>> index = torch.tensor([[0, 1, 2, 0]])\n",
      "    >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src)\n",
      "    tensor([[1, 0, 0, 4, 0],\n",
      "            [0, 2, 0, 0, 0],\n",
      "            [0, 0, 3, 0, 0]])\n",
      "    >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]])\n",
      "    >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src)\n",
      "    tensor([[1, 2, 3, 0, 0],\n",
      "            [6, 7, 0, 0, 8],\n",
      "            [0, 0, 0, 0, 0]])\n",
      "\n",
      "    >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),\n",
      "    ...            1.23, reduce='multiply')\n",
      "    tensor([[2.0000, 2.0000, 2.4600, 2.0000],\n",
      "            [2.0000, 2.0000, 2.0000, 2.4600]])\n",
      "    >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),\n",
      "    ...            1.23, reduce='add')\n",
      "    tensor([[2.0000, 2.0000, 3.2300, 2.0000],\n",
      "            [2.0000, 2.0000, 2.0000, 3.2300]])\n",
      "\u001b[0;31mType:\u001b[0m      method_descriptor"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "torch.Tensor.scatter_?"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "xin",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
