{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.1 Tensor\n",
    "## Basic Tensor operations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch as t"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.]])\n",
      "tensor([[1., 2., 3.],\n",
      "        [4., 5., 6.]])\n",
      "[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]\n"
     ]
    }
   ],
   "source": [
    "a=t.Tensor(2,3)   # Tensor with the given shape (uninitialized)\n",
    "b=t.Tensor([[1,2,3],[4,5,6]])   # list -> Tensor\n",
    "c=b.tolist()    # Tensor -> list\n",
    "print(a)\n",
    "print(b)\n",
    "print(c)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 3])\n",
      "torch.Size([2, 3])\n",
      "6\n",
      "6\n"
     ]
    }
   ],
   "source": [
    "b_size=b.size()\n",
    "print(b_size)   # Tensor.size() returns a torch.Size object\n",
    "print(b.shape)  # tensor.shape gives the same result as tensor.size(), but it is an attribute, not a method, so no parentheses\n",
    "print(b.numel())\n",
    "print(b.nelement())   # numel() and nelement() are equivalent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.]])\n",
      "tensor([2., 3.])\n"
     ]
    }
   ],
   "source": [
    "c=t.Tensor(b_size)   # b_size is a torch.Size object, so it can be used to specify a Tensor's shape\n",
    "d=t.Tensor((2,3))    # note the difference from a=t.Tensor(2,3): the tuple is treated as data\n",
    "print(c)\n",
    "print(d)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Other ways to create a Tensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 1., 1.],\n",
      "        [1., 1., 1.]])\n",
      "tensor([[0., 0., 0.],\n",
      "        [0., 0., 0.]])\n",
      "tensor([1, 3, 5, 7])\n",
      "tensor([ 1.0000,  5.5000, 10.0000])\n",
      "tensor([[-1.0081,  0.5772,  0.3315],\n",
      "        [-0.7035,  1.0028, -0.3037]])\n",
      "tensor([[0.4117, 0.1650, 0.1305],\n",
      "        [0.6868, 0.0379, 0.0322]])\n",
      "tensor([0, 2, 3, 1, 4])\n",
      "tensor([[1., 0., 0.],\n",
      "        [0., 1., 0.]])\n"
     ]
    }
   ],
   "source": [
    "print(t.ones(2,3))\n",
    "print(t.zeros(2,3))\n",
    "print(t.arange(1,8,2))   # from 1 up to (but not including) 8, step 2\n",
    "print(t.linspace(1,10,3))# 3 evenly spaced points from 1 to 10\n",
    "print(t.randn(2,3))  # standard normal distribution\n",
    "print(t.rand(2,3))   # uniform distribution on [0, 1)\n",
    "print(t.randperm(5))  # random permutation of length 5\n",
    "print(t.eye(2,3))  # ones on the diagonal; rows and columns need not match"
   ]
  },
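  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A short sketch (not in the original notebook, assuming the standard torch API): these factory functions also accept a dtype argument, and the *_like variants copy shape and dtype from an existing tensor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = t.zeros(2, 3, dtype=t.long)   # integer zeros instead of the default float\n",
    "print(x.dtype)                    # torch.int64\n",
    "print(t.ones_like(x))             # ones with x's shape and dtype"
   ]
  },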
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Common Tensor operations\n",
    "tensor.view() reshapes a tensor, e.g. from 1 row by 6 columns to 2 rows by 3 columns. A view does not change how the data is stored in memory; it only changes how the data is read out, so the new tensor returned by view shares the same underlying memory with the original."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([0, 1, 2, 3, 4, 5])\n",
      "tensor([[0, 1, 2],\n",
      "        [3, 4, 5]])\n",
      "tensor([[0, 1, 2],\n",
      "        [3, 4, 5]])\n"
     ]
    }
   ],
   "source": [
    "a=t.arange(0,6)\n",
    "print(a)\n",
    "a=a.view(2,3)\n",
    "print(a)\n",
    "b=a.view(-1,3)  # -1 means this dimension's size is inferred from the other dimensions\n",
    "print(b)"
   ]
  },
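  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal check of the memory-sharing claim above (a hypothetical demo, not in the original notebook): a change made through the original tensor is visible through the view."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = t.arange(0, 6)\n",
    "y = x.view(2, 3)   # y is a view on x's storage\n",
    "x[0] = 100         # modify through the original\n",
    "print(y[0, 0])     # the view sees the change: tensor(100)"
   ]
  },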
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "unsqueeze() and squeeze() add or remove dimensions of size 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[0, 1, 2],\n",
      "         [3, 4, 5]]])\n",
      "\n",
      " tensor([[[0, 1, 2]],\n",
      "\n",
      "        [[3, 4, 5]]])\n",
      "\n",
      " tensor([[[0],\n",
      "         [1],\n",
      "         [2]],\n",
      "\n",
      "        [[3],\n",
      "         [4],\n",
      "         [5]]])\n"
     ]
    }
   ],
   "source": [
    "print(b.unsqueeze(0))    # shape becomes 1*2*3\n",
    "print('\\n',b.unsqueeze(1))    # shape becomes 2*1*3\n",
    "print('\\n',b.unsqueeze(2))    # shape becomes 2*3*1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[[[0, 1, 2],\n",
      "           [3, 4, 5]]]]])\n"
     ]
    }
   ],
   "source": [
    "c=b.view(1,1,1,2,3)\n",
    "print(c)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[[0, 1, 2],\n",
      "          [3, 4, 5]]]])\n",
      "tensor([[0, 1, 2],\n",
      "        [3, 4, 5]])\n",
      "tensor([[0, 1, 2],\n",
      "        [3, 4, 5]])\n"
     ]
    }
   ],
   "source": [
    "print(c.squeeze(0))  # remove dimension 0 (it has size 1)\n",
    "d=c\n",
    "for i in range(100):  # dimensions with size greater than 1 cannot be squeezed away\n",
    "    d=d.squeeze(0)\n",
    "print(d)\n",
    "print(c.squeeze()) # remove all dimensions of size 1"
   ]
  },
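  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch (not in the original notebook): squeeze(dim) silently leaves a dimension alone when its size is not 1, which is why the loop above stops shrinking d."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = t.ones(2, 1, 3)\n",
    "print(x.squeeze(0).shape)   # dim 0 has size 2: unchanged, torch.Size([2, 1, 3])\n",
    "print(x.squeeze(1).shape)   # dim 1 has size 1: removed, torch.Size([2, 3])"
   ]
  },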
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "resize() is another way to change a tensor's size, but unlike view it can change the total number of elements, automatically allocating new memory when needed.\n",
    "\n",
    "**From a storage perspective, tensor operations fall into two categories:**\n",
    "- operations that do not modify the tensor itself, e.g. a.add(b), which returns the sum as a new tensor\n",
    "- operations that modify the tensor in place, e.g. a.add_(b), which stores the sum back into a\n",
    "\n",
    "Since resize modifies the tensor itself, it is written with a trailing underscore: b.resize_()"
   ]
  },
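  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two categories can be verified directly (a hypothetical demo, not in the original notebook): add() leaves its tensor unchanged, while add_() overwrites it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = t.ones(2, 3)\n",
    "y = t.ones(2, 3)\n",
    "z = x.add(y)   # returns a new tensor; x is unchanged\n",
    "print(x)       # still all ones\n",
    "x.add_(y)      # in-place; x itself now holds the sum\n",
    "print(x)       # all twos"
   ]
  },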
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0, 1, 2],\n",
      "        [3, 4, 5]])\n",
      "tensor([[0, 1, 2]])\n",
      "tensor([[0, 1, 2]])\n"
     ]
    }
   ],
   "source": [
    "print(b)\n",
    "print(b.resize_(1,3))\n",
    "print(b) # b has been modified in place"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[                0,                 1,                 2],\n",
      "        [                3,                 4,                 5],\n",
      "        [32651561162571873, 31525614010564703, 28992395054481520]])\n"
     ]
    }
   ],
   "source": [
    "print(b.resize_(3,3))  # if nothing has overwritten that memory region, the original data is preserved; the extra elements get newly allocated, uninitialized storage"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Indexing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0.7543, -0.9249,  0.5559,  0.7851],\n",
      "        [-0.7450,  0.6337, -1.2043, -0.4728],\n",
      "        [ 1.0194, -0.6855,  0.2657,  0.9690]])\n",
      "torch.Size([3, 4])\n",
      "tensor([ 0.7543, -0.9249,  0.5559,  0.7851])\n",
      "tensor([ 0.7543, -0.9249,  0.5559,  0.7851])\n",
      "tensor([ 0.7543, -0.7450,  1.0194])\n"
     ]
    }
   ],
   "source": [
    "a=t.randn(3,4)\n",
    "print(a)\n",
    "print(a.shape)\n",
    "print(a[0])  # index 0 along the first dimension (size 3), all of the second dimension (size 4)\n",
    "print(a[0,:])# same as above\n",
    "print(a[:,0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0.7543, -0.9249,  0.5559,  0.7851],\n",
      "        [-0.7450,  0.6337, -1.2043, -0.4728]])\n",
      "tensor([[ 0.7543, -0.9249],\n",
      "        [-0.7450,  0.6337]])\n"
     ]
    }
   ],
   "source": [
    "print(a[:2])  # first two rows\n",
    "print(a[:2,0:2]) # first two rows, first two columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[False, False, False, False],\n",
       "        [False, False, False, False],\n",
       "        [ True, False, False, False]])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a > 1 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([1.0194])\n",
      "tensor([1.0194])\n"
     ]
    }
   ],
   "source": [
    "b=a[a>1] # select all elements greater than 1; equivalent to a.masked_select(a>1)\n",
    "print(b)\n",
    "print(a.masked_select(a>1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.7543, -0.9249,  0.5559,  0.7851],\n",
       "        [-0.7450,  0.6337, -1.2043, -0.4728]])"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a[t.LongTensor([0,1])] # rows 0 and 1"
   ]
  },
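  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Indexing with a LongTensor as above has a functional equivalent, index_select (a sketch, not in the original notebook); the dim argument chooses whether rows or columns are selected."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(a.index_select(0, t.tensor([0, 1])))   # rows 0 and 1, same as a[t.LongTensor([0,1])]\n",
    "print(a.index_select(1, t.tensor([0, 2])))   # columns 0 and 2"
   ]
  },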
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Help on Tensor in module torch object:\n",
      "\n",
      "class Tensor(torch._C._TensorBase)\n",
      " |  Method resolution order:\n",
      " |      Tensor\n",
      " |      torch._C._TensorBase\n",
      " |      builtins.object\n",
      " |  \n",
      " |  Methods defined here:\n",
      " |  \n",
      " |  __abs__ = abs(...)\n",
      " |      abs() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.abs`\n",
      " |  \n",
      " |  __array__(self, dtype=None)\n",
      " |  \n",
      " |  __array_wrap__(self, array)\n",
      " |      # Wrap Numpy array again in a suitable tensor when done, to support e.g.\n",
      " |      # `numpy.sin(tensor) -> tensor` or `numpy.greater(tensor, 0) -> ByteTensor`\n",
      " |  \n",
      " |  __contains__(self, element)\n",
      " |      Check if `element` is present in tensor\n",
      " |      \n",
      " |      Arguments:\n",
      " |          element (Tensor or scalar): element to be checked\n",
      " |              for presence in current tensor\"\n",
      " |  \n",
      " |  __deepcopy__(self, memo)\n",
      " |  \n",
      " |  __dir__(self)\n",
      " |      __dir__() -> list\n",
      " |      default dir() implementation\n",
      " |  \n",
      " |  __eq__ = eq(...)\n",
      " |      eq(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.eq`\n",
      " |  \n",
      " |  __floordiv__(self, other)\n",
      " |  \n",
      " |  __format__(self, format_spec)\n",
      " |      default object formatter\n",
      " |  \n",
      " |  __ge__ = ge(...)\n",
      " |      ge(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ge`\n",
      " |  \n",
      " |  __gt__ = gt(...)\n",
      " |      gt(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.gt`\n",
      " |  \n",
      " |  __hash__(self)\n",
      " |      Return hash(self).\n",
      " |  \n",
      " |  __ipow__(self, other)\n",
      " |  \n",
      " |  __iter__(self)\n",
      " |  \n",
      " |  __itruediv__ = __idiv__(...)\n",
      " |  \n",
      " |  __le__ = le(...)\n",
      " |      le(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.le`\n",
      " |  \n",
      " |  __len__(self)\n",
      " |      Return len(self).\n",
      " |  \n",
      " |  __lt__ = lt(...)\n",
      " |      lt(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.lt`\n",
      " |  \n",
      " |  __ne__ = ne(...)\n",
      " |      ne(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ne`\n",
      " |  \n",
      " |  __neg__ = neg(...)\n",
      " |      neg() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.neg`\n",
      " |  \n",
      " |  __pow__ = pow(...)\n",
      " |      pow(exponent) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.pow`\n",
      " |  \n",
      " |  __rdiv__(self, other)\n",
      " |  \n",
      " |  __reduce_ex__(self, proto)\n",
      " |      helper for pickle\n",
      " |  \n",
      " |  __repr__(self)\n",
      " |      Return repr(self).\n",
      " |  \n",
      " |  __reversed__(self)\n",
      " |      Reverses the tensor along dimension 0.\n",
      " |  \n",
      " |  __rfloordiv__(self, other)\n",
      " |  \n",
      " |  __rpow__(self, other)\n",
      " |  \n",
      " |  __rsub__(self, other)\n",
      " |  \n",
      " |  __rtruediv__ = __rdiv__(self, other)\n",
      " |  \n",
      " |  __setstate__(self, state)\n",
      " |  \n",
      " |  align_to(self, *names)\n",
      " |      Permutes the dimensions of the :attr:`self` tensor to match the order\n",
      " |      specified in :attr:`names`, adding size-one dims for any new names.\n",
      " |      \n",
      " |      All of the dims of :attr:`self` must be named in order to use this method.\n",
      " |      The resulting tensor is a view on the original tensor.\n",
      " |      \n",
      " |      All dimension names of :attr:`self` must be present in :attr:`names`.\n",
      " |      :attr:`names` may contain additional names that are not in ``self.names``;\n",
      " |      the output tensor has a size-one dimension for each of those new names.\n",
      " |      \n",
      " |      :attr:`names` may contain up to one Ellipsis (``...``).\n",
      " |      The Ellipsis is expanded to be equal to all dimension names of :attr:`self`\n",
      " |      that are not mentioned in :attr:`names`, in the order that they appear\n",
      " |      in :attr:`self`.\n",
      " |      \n",
      " |      Python 2 does not support Ellipsis but one may use a string literal\n",
      " |      instead (``'...'``).\n",
      " |      \n",
      " |      Arguments:\n",
      " |          names (iterable of str): The desired dimension ordering of the\n",
      " |              output tensor. May contain up to one Ellipsis that is expanded\n",
      " |              to all unmentioned dim names of :attr:`self`.\n",
      " |      \n",
      " |      Examples::\n",
      " |      \n",
      " |          >>> tensor = torch.randn(2, 2, 2, 2, 2, 2)\n",
      " |          >>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')\n",
      " |      \n",
      " |          # Move the F and E dims to the front while keeping the rest in order\n",
      " |          >>> named_tensor.align_to('F', 'E', ...)\n",
      " |      \n",
      " |      .. warning::\n",
      " |          The named tensor API is experimental and subject to change.\n",
      " |  \n",
      " |  backward(self, gradient=None, retain_graph=None, create_graph=False)\n",
      " |      Computes the gradient of current tensor w.r.t. graph leaves.\n",
      " |      \n",
      " |      The graph is differentiated using the chain rule. If the tensor is\n",
      " |      non-scalar (i.e. its data has more than one element) and requires\n",
      " |      gradient, the function additionally requires specifying ``gradient``.\n",
      " |      It should be a tensor of matching type and location, that contains\n",
      " |      the gradient of the differentiated function w.r.t. ``self``.\n",
      " |      \n",
      " |      This function accumulates gradients in the leaves - you might need to\n",
      " |      zero them before calling it.\n",
      " |      \n",
      " |      Arguments:\n",
      " |          gradient (Tensor or None): Gradient w.r.t. the\n",
      " |              tensor. If it is a tensor, it will be automatically converted\n",
      " |              to a Tensor that does not require grad unless ``create_graph`` is True.\n",
      " |              None values can be specified for scalar Tensors or ones that\n",
      " |              don't require grad. If a None value would be acceptable then\n",
      " |              this argument is optional.\n",
      " |          retain_graph (bool, optional): If ``False``, the graph used to compute\n",
      " |              the grads will be freed. Note that in nearly all cases setting\n",
      " |              this option to True is not needed and often can be worked around\n",
      " |              in a much more efficient way. Defaults to the value of\n",
      " |              ``create_graph``.\n",
      " |          create_graph (bool, optional): If ``True``, graph of the derivative will\n",
      " |              be constructed, allowing to compute higher order derivative\n",
      " |              products. Defaults to ``False``.\n",
      " |  \n",
      " |  detach(...)\n",
      " |      Returns a new Tensor, detached from the current graph.\n",
      " |      \n",
      " |      The result will never require gradient.\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |        Returned Tensor shares the same storage with the original one.\n",
      " |        In-place modifications on either of them will be seen, and may trigger\n",
      " |        errors in correctness checks.\n",
      " |        IMPORTANT NOTE: Previously, in-place size / stride / storage changes\n",
      " |        (such as `resize_` / `resize_as_` / `set_` / `transpose_`) to the returned tensor\n",
      " |        also update the original tensor. Now, these in-place changes will not update the\n",
      " |        original tensor anymore, and will instead trigger an error.\n",
      " |        For sparse tensors:\n",
      " |        In-place indices / values changes (such as `zero_` / `copy_` / `add_`) to the\n",
      " |        returned tensor will not update the original tensor anymore, and will instead\n",
      " |        trigger an error.\n",
      " |  \n",
      " |  detach_(...)\n",
      " |      Detaches the Tensor from the graph that created it, making it a leaf.\n",
      " |      Views cannot be detached in-place.\n",
      " |  \n",
      " |  is_shared(self)\n",
      " |      Checks if tensor is in shared memory.\n",
      " |      \n",
      " |      This is always ``True`` for CUDA tensors.\n",
      " |  \n",
      " |  lu(self, pivot=True, get_infos=False)\n",
      " |      See :func:`torch.lu`\n",
      " |  \n",
      " |  norm(self, p='fro', dim=None, keepdim=False, dtype=None)\n",
      " |      See :func:`torch.norm`\n",
      " |  \n",
      " |  refine_names(self, *names)\n",
      " |      Refines the dimension names of :attr:`self` according to :attr:`names`.\n",
      " |      \n",
      " |      Refining is a special case of renaming that \"lifts\" unnamed dimensions.\n",
      " |      A ``None`` dim can be refined to have any name; a named dim can only be\n",
      " |      refined to have the same name.\n",
      " |      \n",
      " |      Because named tensors can coexist with unnamed tensors, refining names\n",
      " |      gives a nice way to write named-tensor-aware code that works with both\n",
      " |      named and unnamed tensors.\n",
      " |      \n",
      " |      :attr:`names` may contain up to one Ellipsis (``...``).\n",
      " |      The Ellipsis is expanded greedily; it is expanded in-place to fill\n",
      " |      :attr:`names` to the same length as ``self.dim()`` using names from the\n",
      " |      corresponding indices of ``self.names``.\n",
      " |      \n",
      " |      Python 2 does not support Ellipsis but one may use a string literal\n",
      " |      instead (``'...'``).\n",
      " |      \n",
      " |      Arguments:\n",
      " |          names (iterable of str): The desired names of the output tensor. May\n",
      " |              contain up to one Ellipsis.\n",
      " |      \n",
      " |      Examples::\n",
      " |      \n",
      " |          >>> imgs = torch.randn(32, 3, 128, 128)\n",
      " |          >>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')\n",
      " |          >>> named_imgs.names\n",
      " |          ('N', 'C', 'H', 'W')\n",
      " |      \n",
      " |          >>> tensor = torch.randn(2, 3, 5, 7, 11)\n",
      " |          >>> tensor = tensor.refine_names('A', ..., 'B', 'C')\n",
      " |          >>> tensor.names\n",
      " |          ('A', None, None, 'B', 'C')\n",
      " |      \n",
      " |      .. warning::\n",
      " |          The named tensor API is experimental and subject to change.\n",
      " |  \n",
      " |  register_hook(self, hook)\n",
      " |      Registers a backward hook.\n",
      " |      \n",
      " |      The hook will be called every time a gradient with respect to the\n",
      " |      Tensor is computed. The hook should have the following signature::\n",
      " |      \n",
      " |          hook(grad) -> Tensor or None\n",
      " |      \n",
      " |      \n",
      " |      The hook should not modify its argument, but it can optionally return\n",
      " |      a new gradient which will be used in place of :attr:`grad`.\n",
      " |      \n",
      " |      This function returns a handle with a method ``handle.remove()``\n",
      " |      that removes the hook from the module.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> v = torch.tensor([0., 0., 0.], requires_grad=True)\n",
      " |          >>> h = v.register_hook(lambda grad: grad * 2)  # double the gradient\n",
      " |          >>> v.backward(torch.tensor([1., 2., 3.]))\n",
      " |          >>> v.grad\n",
      " |      \n",
      " |           2\n",
      " |           4\n",
      " |           6\n",
      " |          [torch.FloatTensor of size (3,)]\n",
      " |      \n",
      " |          >>> h.remove()  # removes the hook\n",
      " |  \n",
      " |  reinforce(self, reward)\n",
      " |  \n",
      " |  rename(self, *names, **rename_map)\n",
      " |      Renames dimension names of :attr:`self`.\n",
      " |      \n",
      " |      There are two main usages:\n",
      " |      \n",
      " |      ``self.rename(**rename_map)`` returns a view on tensor that has dims\n",
      " |      renamed as specified in the mapping :attr:`rename_map`.\n",
      " |      \n",
      " |      ``self.rename(*names)`` returns a view on tensor, renaming all\n",
      " |      dimensions positionally using :attr:`names`.\n",
      " |      Use ``self.rename(None)`` to drop names on a tensor.\n",
      " |      \n",
      " |      One cannot specify both positional args :attr:`names` and keyword args\n",
      " |      :attr:`rename_map`.\n",
      " |      \n",
      " |      Examples::\n",
      " |      \n",
      " |          >>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))\n",
      " |          >>> renamed_imgs = imgs.rename(N='batch', C='channels')\n",
      " |          >>> renamed_imgs.names\n",
      " |          ('batch', 'channels', 'H', 'W')\n",
      " |      \n",
      " |          >>> renamed_imgs = imgs.rename(None)\n",
      " |          >>> renamed_imgs.names\n",
      " |          (None,)\n",
      " |      \n",
      " |          >>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width')\n",
      " |          >>> renamed_imgs.names\n",
      " |          ('batch', 'channel', 'height', 'width')\n",
      " |      \n",
      " |      .. warning::\n",
      " |          The named tensor API is experimental and subject to change.\n",
      " |  \n",
      " |  rename_(self, *names, **rename_map)\n",
      " |      In-place version of :meth:`~Tensor.rename`.\n",
      " |  \n",
      " |  resize(self, *sizes)\n",
      " |  \n",
      " |  resize_as(self, tensor)\n",
      " |  \n",
      " |  retain_grad(self)\n",
      " |      Enables .grad attribute for non-leaf Tensors.\n",
      " |  \n",
      " |  share_memory_(self)\n",
      " |      Moves the underlying storage to shared memory.\n",
      " |      \n",
      " |      This is a no-op if the underlying storage is already in shared memory\n",
      " |      and for CUDA tensors. Tensors in shared memory cannot be resized.\n",
      " |  \n",
      " |  split(self, split_size, dim=0)\n",
      " |      See :func:`torch.split`\n",
      " |  \n",
      " |  stft(self, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True)\n",
      " |      See :func:`torch.stft`\n",
      " |      \n",
      " |      .. warning::\n",
      " |        This function changed signature at version 0.4.1. Calling with\n",
      " |        the previous signature may cause error or return incorrect result.\n",
      " |  \n",
      " |  unflatten(self, dim, namedshape)\n",
      " |      Unflattens the named dimension :attr:`dim`, viewing it in the shape\n",
      " |      specified by :attr:`namedshape`.\n",
      " |      \n",
      " |      Arguments:\n",
      " |          namedshape: (iterable of ``(name, size)`` tuples).\n",
      " |      \n",
      " |      Examples::\n",
      " |      \n",
      " |          >>> flat_imgs = torch.rand(32, 3 * 128 * 128, names=('N', 'features'))\n",
      " |          >>> imgs = flat_imgs.unflatten('features', (('C', 3), ('H', 128), ('W', 128)))\n",
      " |          >>> imgs.names, images.shape\n",
      " |          (('N', 'C', 'H', 'W'), torch.Size([32, 3, 128, 128]))\n",
      " |      \n",
      " |      .. warning::\n",
      " |          The named tensor API is experimental and subject to change.\n",
      " |  \n",
      " |  unique(self, sorted=True, return_inverse=False, return_counts=False, dim=None)\n",
      " |      Returns the unique elements of the input tensor.\n",
      " |      \n",
      " |      See :func:`torch.unique`\n",
      " |  \n",
      " |  unique_consecutive(self, return_inverse=False, return_counts=False, dim=None)\n",
      " |      Eliminates all but the first element from every consecutive group of equivalent elements.\n",
      " |      \n",
      " |      See :func:`torch.unique_consecutive`\n",
      " |  \n",
      " |  ----------------------------------------------------------------------\n",
      " |  Data descriptors defined here:\n",
      " |  \n",
      " |  __cuda_array_interface__\n",
      " |      Array view description for cuda tensors.\n",
      " |      \n",
      " |      See:\n",
      " |      https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html\n",
      " |  \n",
      " |  __dict__\n",
      " |      dictionary for instance variables (if defined)\n",
      " |  \n",
      " |  __weakref__\n",
      " |      list of weak references to the object (if defined)\n",
      " |  \n",
      " |  ----------------------------------------------------------------------\n",
      " |  Data and other attributes defined here:\n",
      " |  \n",
      " |  __array_priority__ = 1000\n",
      " |  \n",
      " |  ----------------------------------------------------------------------\n",
      " |  Methods inherited from torch._C._TensorBase:\n",
      " |  \n",
      " |  __add__(...)\n",
      " |  \n",
      " |  __and__(...)\n",
      " |  \n",
      " |  __bool__(...)\n",
      " |  \n",
      " |  __delitem__(self, key, /)\n",
      " |      Delete self[key].\n",
      " |  \n",
      " |  __div__(...)\n",
      " |  \n",
      " |  __float__(...)\n",
      " |  \n",
      " |  __getitem__(self, key, /)\n",
      " |      Return self[key].\n",
      " |  \n",
      " |  __iadd__(...)\n",
      " |  \n",
      " |  __iand__(...)\n",
      " |  \n",
      " |  __idiv__(...)\n",
      " |  \n",
      " |  __ilshift__(...)\n",
      " |  \n",
      " |  __imul__(...)\n",
      " |  \n",
      " |  __index__(...)\n",
      " |  \n",
      " |  __int__(...)\n",
      " |  \n",
      " |  __invert__(...)\n",
      " |  \n",
      " |  __ior__(...)\n",
      " |  \n",
      " |  __irshift__(...)\n",
      " |  \n",
      " |  __isub__(...)\n",
      " |  \n",
      " |  __ixor__(...)\n",
      " |  \n",
      " |  __long__(...)\n",
      " |  \n",
      " |  __lshift__(...)\n",
      " |  \n",
      " |  __matmul__(...)\n",
      " |  \n",
      " |  __mod__(...)\n",
      " |  \n",
      " |  __mul__(...)\n",
      " |  \n",
      " |  __new__(*args, **kwargs) from builtins.type\n",
      " |      Create and return a new object.  See help(type) for accurate signature.\n",
      " |  \n",
      " |  __nonzero__(...)\n",
      " |  \n",
      " |  __or__(...)\n",
      " |  \n",
      " |  __radd__(...)\n",
      " |  \n",
      " |  __rmul__(...)\n",
      " |  \n",
      " |  __rshift__(...)\n",
      " |  \n",
      " |  __setitem__(self, key, value, /)\n",
      " |      Set self[key] to value.\n",
      " |  \n",
      " |  __sub__(...)\n",
      " |  \n",
      " |  __truediv__(...)\n",
      " |  \n",
      " |  __xor__(...)\n",
      " |  \n",
      " |  abs(...)\n",
      " |      abs() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.abs`\n",
      " |  \n",
      " |  abs_(...)\n",
      " |      abs_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.abs`\n",
      " |  \n",
      " |  acos(...)\n",
      " |      acos() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.acos`\n",
      " |  \n",
      " |  acos_(...)\n",
      " |      acos_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.acos`\n",
      " |  \n",
      " |  add(...)\n",
      " |      add(value) -> Tensor\n",
      " |      add(value=1, other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.add`\n",
      " |  \n",
      " |  add_(...)\n",
      " |      add_(value) -> Tensor\n",
      " |      add_(value=1, other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.add`\n",
      " |  \n",
      " |  addbmm(...)\n",
      " |      addbmm(beta=1, alpha=1, batch1, batch2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.addbmm`\n",
      " |  \n",
      " |  addbmm_(...)\n",
      " |      addbmm_(beta=1, alpha=1, batch1, batch2) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.addbmm`\n",
      " |  \n",
      " |  addcdiv(...)\n",
      " |      addcdiv(value=1, tensor1, tensor2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.addcdiv`\n",
      " |  \n",
      " |  addcdiv_(...)\n",
      " |      addcdiv_(value=1, tensor1, tensor2) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.addcdiv`\n",
      " |  \n",
      " |  addcmul(...)\n",
      " |      addcmul(value=1, tensor1, tensor2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.addcmul`\n",
      " |  \n",
      " |  addcmul_(...)\n",
      " |      addcmul_(value=1, tensor1, tensor2) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.addcmul`\n",
      " |  \n",
      " |  addmm(...)\n",
      " |      addmm(beta=1, alpha=1, mat1, mat2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.addmm`\n",
      " |  \n",
      " |  addmm_(...)\n",
      " |      addmm_(beta=1, alpha=1, mat1, mat2) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.addmm`\n",
      " |  \n",
      " |  addmv(...)\n",
      " |      addmv(beta=1, alpha=1, mat, vec) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.addmv`\n",
      " |  \n",
      " |  addmv_(...)\n",
      " |      addmv_(beta=1, alpha=1, mat, vec) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.addmv`\n",
      " |  \n",
      " |  addr(...)\n",
      " |      addr(beta=1, alpha=1, vec1, vec2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.addr`\n",
      " |  \n",
      " |  addr_(...)\n",
      " |      addr_(beta=1, alpha=1, vec1, vec2) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.addr`\n",
      " |  \n",
      " |  align_as(...)\n",
      " |      align_as(other) -> Tensor\n",
      " |      \n",
      " |      Permutes the dimensions of the :attr:`self` tensor to match the dimension order\n",
      " |      in the :attr:`other` tensor, adding size-one dims for any new names.\n",
      " |      \n",
      " |      This operation is useful for explicit broadcasting by names (see examples).\n",
      " |      \n",
      " |      All of the dims of :attr:`self` must be named in order to use this method.\n",
      " |      The resulting tensor is a view on the original tensor.\n",
      " |      \n",
      " |      All dimension names of :attr:`self` must be present in ``other.names``.\n",
      " |      :attr:`other` may contain named dimensions that are not in ``self.names``;\n",
      " |      the output tensor has a size-one dimension for each of those new names.\n",
      " |      \n",
      " |      To align a tensor to a specific order, use :meth:`~Tensor.align_to`.\n",
      " |      \n",
      " |      Examples::\n",
      " |      \n",
      " |          # Example 1: Applying a mask\n",
      " |          >>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H')\n",
      " |          >>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C'))\n",
      " |          >>> imgs.masked_fill_(mask.align_as(imgs), 0)\n",
      " |      \n",
      " |      \n",
      " |          # Example 2: Applying a per-channel-scale\n",
      " |          def scale_channels(input, scale):\n",
      " |              scale = scale.refine_names('C')\n",
      " |              return input * scale.align_as(input)\n",
      " |      \n",
      " |          >>> num_channels = 3\n",
      " |          >>> scale = torch.randn(num_channels, names='C')\n",
      " |          >>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C'))\n",
      " |          >>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W'))\n",
      " |          >>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D'))\n",
      " |      \n",
      " |          # scale_channels is agnostic to the dimension order of the input\n",
      " |          >>> scale_channels(imgs, scale)\n",
      " |          >>> scale_channels(more_imgs, scale)\n",
      " |          >>> scale_channels(videos, scale)\n",
      " |      \n",
      " |      .. warning::\n",
      " |          The named tensor API is experimental and subject to change.\n",
      " |  \n",
      " |  all(...)\n",
      " |      .. function:: all() -> bool\n",
      " |      \n",
      " |      Returns True if all elements in the tensor are True, False otherwise.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> a = torch.rand(1, 2).bool()\n",
      " |          >>> a\n",
      " |          tensor([[False, True]], dtype=torch.bool)\n",
      " |          >>> a.all()\n",
      " |          tensor(False, dtype=torch.bool)\n",
      " |      \n",
      " |      .. function:: all(dim, keepdim=False, out=None) -> Tensor\n",
      " |      \n",
      " |      Returns True if all elements in each row of the tensor in the given\n",
      " |      dimension :attr:`dim` are True, False otherwise.\n",
      " |      \n",
      " |      If :attr:`keepdim` is ``True``, the output tensor is of the same size as\n",
      " |      :attr:`input` except in the dimension :attr:`dim` where it is of size 1.\n",
      " |      Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting\n",
      " |      in the output tensor having 1 fewer dimension than :attr:`input`.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): the dimension to reduce\n",
      " |          keepdim (bool): whether the output tensor has :attr:`dim` retained or not\n",
      " |          out (Tensor, optional): the output tensor\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> a = torch.rand(4, 2).bool()\n",
      " |          >>> a\n",
      " |          tensor([[True, True],\n",
      " |                  [True, False],\n",
      " |                  [True, True],\n",
      " |                  [True, True]], dtype=torch.bool)\n",
      " |          >>> a.all(dim=1)\n",
      " |          tensor([ True, False,  True,  True], dtype=torch.bool)\n",
      " |          >>> a.all(dim=0)\n",
      " |          tensor([ True, False], dtype=torch.bool)\n",
      " |  \n",
      " |  allclose(...)\n",
      " |      allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.allclose`\n",
      " |  \n",
      " |  any(...)\n",
      " |      .. function:: any() -> bool\n",
      " |      \n",
      " |      Returns True if any elements in the tensor are True, False otherwise.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> a = torch.rand(1, 2).bool()\n",
      " |          >>> a\n",
      " |          tensor([[False, True]], dtype=torch.bool)\n",
      " |          >>> a.any()\n",
      " |          tensor(True, dtype=torch.bool)\n",
      " |      \n",
      " |      .. function:: any(dim, keepdim=False, out=None) -> Tensor\n",
      " |      \n",
      " |      Returns True if any elements in each row of the tensor in the given\n",
      " |      dimension :attr:`dim` are True, False otherwise.\n",
      " |      \n",
      " |      If :attr:`keepdim` is ``True``, the output tensor is of the same size as\n",
      " |      :attr:`input` except in the dimension :attr:`dim` where it is of size 1.\n",
      " |      Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting\n",
      " |      in the output tensor having 1 fewer dimension than :attr:`input`.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): the dimension to reduce\n",
      " |          keepdim (bool): whether the output tensor has :attr:`dim` retained or not\n",
      " |          out (Tensor, optional): the output tensor\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> a = torch.randn(4, 2) < 0\n",
      " |          >>> a\n",
      " |          tensor([[ True,  True],\n",
      " |                  [False,  True],\n",
      " |                  [ True,  True],\n",
      " |                  [False, False]])\n",
      " |          >>> a.any(1)\n",
      " |          tensor([ True,  True,  True, False])\n",
      " |          >>> a.any(0)\n",
      " |          tensor([True, True])\n",
      " |  \n",
      " |  apply_(...)\n",
      " |      apply_(callable) -> Tensor\n",
      " |      \n",
      " |      Applies the function :attr:`callable` to each element in the tensor, replacing\n",
      " |      each element with the value returned by :attr:`callable`.\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          This function only works with CPU tensors and should not be used in code\n",
      " |          sections that require high performance.\n",
      " |  \n",
      " |  argmax(...)\n",
      " |      argmax(dim=None, keepdim=False) -> LongTensor\n",
      " |      \n",
      " |      See :func:`torch.argmax`\n",
      " |  \n",
      " |  argmin(...)\n",
      " |      argmin(dim=None, keepdim=False) -> LongTensor\n",
      " |      \n",
      " |      See :func:`torch.argmin`\n",
      " |  \n",
      " |  argsort(...)\n",
      " |      argsort(dim=-1, descending=False) -> LongTensor\n",
      " |      \n",
      " |      See :func:`torch.argsort`\n",
      " |  \n",
      " |  as_strided(...)\n",
      " |      as_strided(size, stride, storage_offset=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.as_strided`\n",
      " |  \n",
      " |  as_strided_(...)\n",
      " |  \n",
      " |  asin(...)\n",
      " |      asin() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.asin`\n",
      " |  \n",
      " |  asin_(...)\n",
      " |      asin_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.asin`\n",
      " |  \n",
      " |  atan(...)\n",
      " |      atan() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.atan`\n",
      " |  \n",
      " |  atan2(...)\n",
      " |      atan2(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.atan2`\n",
      " |  \n",
      " |  atan2_(...)\n",
      " |      atan2_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.atan2`\n",
      " |  \n",
      " |  atan_(...)\n",
      " |      atan_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.atan`\n",
      " |  \n",
      " |  baddbmm(...)\n",
      " |      baddbmm(beta=1, alpha=1, batch1, batch2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.baddbmm`\n",
      " |  \n",
      " |  baddbmm_(...)\n",
      " |      baddbmm_(beta=1, alpha=1, batch1, batch2) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.baddbmm`\n",
      " |  \n",
      " |  bernoulli(...)\n",
      " |      bernoulli(*, generator=None) -> Tensor\n",
      " |      \n",
      " |      Returns a result tensor where each :math:`\\texttt{result[i]}` is independently\n",
      " |      sampled from :math:`\\text{Bernoulli}(\\texttt{self[i]})`. :attr:`self` must have\n",
      " |      floating point ``dtype``, and the result will have the same ``dtype``.\n",
      " |      \n",
      " |      See :func:`torch.bernoulli`\n",
      " |  \n",
      " |  bernoulli_(...)\n",
      " |      .. function:: bernoulli_(p=0.5, *, generator=None) -> Tensor\n",
      " |      \n",
      " |          Fills each location of :attr:`self` with an independent sample from\n",
      " |          :math:`\\text{Bernoulli}(\\texttt{p})`. :attr:`self` can have integral\n",
      " |          ``dtype``.\n",
      " |      \n",
      " |      .. function:: bernoulli_(p_tensor, *, generator=None) -> Tensor\n",
      " |      \n",
      " |          :attr:`p_tensor` should be a tensor containing probabilities to be used for\n",
      " |          drawing the binary random number.\n",
      " |      \n",
      " |          The :math:`\\text{i}^{th}` element of :attr:`self` tensor will be set to a\n",
      " |          value sampled from :math:`\\text{Bernoulli}(\\texttt{p\\_tensor[i]})`.\n",
      " |      \n",
      " |          :attr:`self` can have integral ``dtype``, but :attr:`p_tensor` must have\n",
      " |          floating point ``dtype``.\n",
      " |      \n",
      " |      See also :meth:`~Tensor.bernoulli` and :func:`torch.bernoulli`\n",
      " |  \n",
      " |  bfloat16(...)\n",
      " |      bfloat16() -> Tensor\n",
      " |      ``self.bfloat16()`` is equivalent to ``self.to(torch.bfloat16)``. See :func:`to`.\n",
      " |  \n",
      " |  bincount(...)\n",
      " |      bincount(weights=None, minlength=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.bincount`\n",
      " |  \n",
      " |  bitwise_not(...)\n",
      " |      bitwise_not() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.bitwise_not`\n",
      " |  \n",
      " |  bitwise_not_(...)\n",
      " |      bitwise_not_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.bitwise_not`\n",
      " |  \n",
      " |  bmm(...)\n",
      " |      bmm(batch2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.bmm`\n",
      " |  \n",
      " |  bool(...)\n",
      " |      bool() -> Tensor\n",
      " |      \n",
      " |      ``self.bool()`` is equivalent to ``self.to(torch.bool)``. See :func:`to`.\n",
      " |  \n",
      " |  byte(...)\n",
      " |      byte() -> Tensor\n",
      " |      \n",
      " |      ``self.byte()`` is equivalent to ``self.to(torch.uint8)``. See :func:`to`.\n",
      " |  \n",
      " |  cauchy_(...)\n",
      " |      cauchy_(median=0, sigma=1, *, generator=None) -> Tensor\n",
      " |      \n",
      " |      Fills the tensor with numbers drawn from the Cauchy distribution:\n",
      " |      \n",
      " |      .. math::\n",
      " |      \n",
      " |          f(x) = \\dfrac{1}{\\pi} \\dfrac{\\sigma}{(x - \\text{median})^2 + \\sigma^2}\n",
      " |  \n",
      " |  ceil(...)\n",
      " |      ceil() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ceil`\n",
      " |  \n",
      " |  ceil_(...)\n",
      " |      ceil_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.ceil`\n",
      " |  \n",
      " |  char(...)\n",
      " |      char() -> Tensor\n",
      " |      \n",
      " |      ``self.char()`` is equivalent to ``self.to(torch.int8)``. See :func:`to`.\n",
      " |  \n",
      " |  cholesky(...)\n",
      " |      cholesky(upper=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cholesky`\n",
      " |  \n",
      " |  cholesky_inverse(...)\n",
      " |      cholesky_inverse(upper=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cholesky_inverse`\n",
      " |  \n",
      " |  cholesky_solve(...)\n",
      " |      cholesky_solve(input2, upper=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cholesky_solve`\n",
      " |  \n",
      " |  chunk(...)\n",
      " |      chunk(chunks, dim=0) -> List of Tensors\n",
      " |      \n",
      " |      See :func:`torch.chunk`\n",
      " |  \n",
      " |  clamp(...)\n",
      " |      clamp(min, max) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.clamp`\n",
      " |  \n",
      " |  clamp_(...)\n",
      " |      clamp_(min, max) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.clamp`\n",
      " |  \n",
      " |  clamp_max(...)\n",
      " |  \n",
      " |  clamp_max_(...)\n",
      " |  \n",
      " |  clamp_min(...)\n",
      " |  \n",
      " |  clamp_min_(...)\n",
      " |  \n",
      " |  clone(...)\n",
      " |      clone() -> Tensor\n",
      " |      \n",
      " |      Returns a copy of the :attr:`self` tensor. The copy has the same size and data\n",
      " |      type as :attr:`self`.\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          Unlike `copy_()`, this function is recorded in the computation graph. Gradients\n",
      " |          propagating to the cloned tensor will propagate to the original tensor.\n",
      " |  \n",
      " |  coalesce(...)\n",
      " |  \n",
      " |  contiguous(...)\n",
      " |      contiguous() -> Tensor\n",
      " |      \n",
      " |      Returns a contiguous tensor containing the same data as :attr:`self` tensor. If\n",
      " |      :attr:`self` tensor is contiguous, this function returns the :attr:`self`\n",
      " |      tensor.\n",
      " |  \n",
      " |  copy_(...)\n",
      " |      copy_(src, non_blocking=False) -> Tensor\n",
      " |      \n",
      " |      Copies the elements from :attr:`src` into :attr:`self` tensor and returns\n",
      " |      :attr:`self`.\n",
      " |      \n",
      " |      The :attr:`src` tensor must be :ref:`broadcastable <broadcasting-semantics>`\n",
      " |      with the :attr:`self` tensor. It may be of a different data type or reside on a\n",
      " |      different device.\n",
      " |      \n",
      " |      Args:\n",
      " |          src (Tensor): the source tensor to copy from\n",
      " |          non_blocking (bool): if ``True`` and this copy is between CPU and GPU,\n",
      " |              the copy may occur asynchronously with respect to the host. For other\n",
      " |              cases, this argument has no effect.\n",
      " |  \n",
      " |  cos(...)\n",
      " |      cos() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cos`\n",
      " |  \n",
      " |  cos_(...)\n",
      " |      cos_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.cos`\n",
      " |  \n",
      " |  cosh(...)\n",
      " |      cosh() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cosh`\n",
      " |  \n",
      " |  cosh_(...)\n",
      " |      cosh_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.cosh`\n",
      " |  \n",
      " |  cpu(...)\n",
      " |      cpu() -> Tensor\n",
      " |      \n",
      " |      Returns a copy of this object in CPU memory.\n",
      " |      \n",
      " |      If this object is already in CPU memory and on the correct device,\n",
      " |      then no copy is performed and the original object is returned.\n",
      " |  \n",
      " |  cross(...)\n",
      " |      cross(other, dim=-1) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cross`\n",
      " |  \n",
      " |  cuda(...)\n",
      " |      cuda(device=None, non_blocking=False) -> Tensor\n",
      " |      \n",
      " |      Returns a copy of this object in CUDA memory.\n",
      " |      \n",
      " |      If this object is already in CUDA memory and on the correct device,\n",
      " |      then no copy is performed and the original object is returned.\n",
      " |      \n",
      " |      Args:\n",
      " |          device (:class:`torch.device`): The destination GPU device.\n",
      " |              Defaults to the current CUDA device.\n",
      " |          non_blocking (bool): If ``True`` and the source is in pinned memory,\n",
      " |              the copy will be asynchronous with respect to the host.\n",
      " |              Otherwise, the argument has no effect. Default: ``False``.\n",
      " |  \n",
      " |  cumprod(...)\n",
      " |      cumprod(dim, dtype=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cumprod`\n",
      " |  \n",
      " |  cumsum(...)\n",
      " |      cumsum(dim, dtype=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.cumsum`\n",
      " |  \n",
      " |  data_ptr(...)\n",
      " |      data_ptr() -> int\n",
      " |      \n",
      " |      Returns the address of the first element of :attr:`self` tensor.\n",
      " |  \n",
      " |  dense_dim(...)\n",
      " |      dense_dim() -> int\n",
      " |      \n",
      " |      If :attr:`self` is a sparse COO tensor (i.e., with ``torch.sparse_coo`` layout),\n",
      " |      this returns the number of dense dimensions. Otherwise, this throws an error.\n",
      " |      \n",
      " |      See also :meth:`Tensor.sparse_dim`.\n",
      " |  \n",
      " |  dequantize(...)\n",
      " |      dequantize() -> Tensor\n",
      " |      \n",
      " |      Given a quantized Tensor, dequantize it and return the dequantized float Tensor.\n",
      " |  \n",
      " |  det(...)\n",
      " |      det() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.det`\n",
      " |  \n",
      " |  diag(...)\n",
      " |      diag(diagonal=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.diag`\n",
      " |  \n",
      " |  diag_embed(...)\n",
      " |      diag_embed(offset=0, dim1=-2, dim2=-1) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.diag_embed`\n",
      " |  \n",
      " |  diagflat(...)\n",
      " |      diagflat(offset=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.diagflat`\n",
      " |  \n",
      " |  diagonal(...)\n",
      " |      diagonal(offset=0, dim1=0, dim2=1) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.diagonal`\n",
      " |  \n",
      " |  digamma(...)\n",
      " |      digamma() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.digamma`\n",
      " |  \n",
      " |  digamma_(...)\n",
      " |      digamma_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.digamma`\n",
      " |  \n",
      " |  dim(...)\n",
      " |      dim() -> int\n",
      " |      \n",
      " |      Returns the number of dimensions of :attr:`self` tensor.\n",
      " |  \n",
      " |  dist(...)\n",
      " |      dist(other, p=2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.dist`\n",
      " |  \n",
      " |  div(...)\n",
      " |      div(value) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.div`\n",
      " |  \n",
      " |  div_(...)\n",
      " |      div_(value) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.div`\n",
      " |  \n",
      " |  dot(...)\n",
      " |      dot(tensor2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.dot`\n",
      " |  \n",
      " |  double(...)\n",
      " |      double() -> Tensor\n",
      " |      \n",
      " |      ``self.double()`` is equivalent to ``self.to(torch.float64)``. See :func:`to`.\n",
      " |  \n",
      " |  eig(...)\n",
      " |      eig(eigenvectors=False) -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.eig`\n",
      " |  \n",
      " |  element_size(...)\n",
      " |      element_size() -> int\n",
      " |      \n",
      " |      Returns the size in bytes of an individual element.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> torch.tensor([]).element_size()\n",
      " |          4\n",
      " |          >>> torch.tensor([], dtype=torch.uint8).element_size()\n",
      " |          1\n",
      " |  \n",
      " |  eq(...)\n",
      " |      eq(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.eq`\n",
      " |  \n",
      " |  eq_(...)\n",
      " |      eq_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.eq`\n",
      " |  \n",
      " |  equal(...)\n",
      " |      equal(other) -> bool\n",
      " |      \n",
      " |      See :func:`torch.equal`\n",
      " |  \n",
      " |  erf(...)\n",
      " |      erf() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.erf`\n",
      " |  \n",
      " |  erf_(...)\n",
      " |      erf_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.erf`\n",
      " |  \n",
      " |  erfc(...)\n",
      " |      erfc() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.erfc`\n",
      " |  \n",
      " |  erfc_(...)\n",
      " |      erfc_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.erfc`\n",
      " |  \n",
      " |  erfinv(...)\n",
      " |      erfinv() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.erfinv`\n",
      " |  \n",
      " |  erfinv_(...)\n",
      " |      erfinv_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.erfinv`\n",
      " |  \n",
      " |  exp(...)\n",
      " |      exp() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.exp`\n",
      " |  \n",
      " |  exp_(...)\n",
      " |      exp_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.exp`\n",
      " |  \n",
      " |  expand(...)\n",
      " |      expand(*sizes) -> Tensor\n",
      " |      \n",
      " |      Returns a new view of the :attr:`self` tensor with singleton dimensions expanded\n",
      " |      to a larger size.\n",
      " |      \n",
      " |      Passing -1 as the size for a dimension means not changing the size of\n",
      " |      that dimension.\n",
      " |      \n",
      " |      Tensor can be also expanded to a larger number of dimensions, and the\n",
      " |      new ones will be appended at the front. For the new dimensions, the\n",
      " |      size cannot be set to -1.\n",
      " |      \n",
      " |      Expanding a tensor does not allocate new memory, but only creates a\n",
      " |      new view on the existing tensor where a dimension of size one is\n",
      " |      expanded to a larger size by setting the ``stride`` to 0. Any dimension\n",
      " |      of size 1 can be expanded to an arbitrary value without allocating new\n",
      " |      memory.\n",
      " |      \n",
      " |      Args:\n",
      " |          *sizes (torch.Size or int...): the desired expanded size\n",
      " |      \n",
      " |      .. warning::\n",
      " |      \n",
      " |          More than one element of an expanded tensor may refer to a single\n",
      " |          memory location. As a result, in-place operations (especially ones that\n",
      " |          are vectorized) may result in incorrect behavior. If you need to write\n",
      " |          to the tensors, please clone them first.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([[1], [2], [3]])\n",
      " |          >>> x.size()\n",
      " |          torch.Size([3, 1])\n",
      " |          >>> x.expand(3, 4)\n",
      " |          tensor([[ 1,  1,  1,  1],\n",
      " |                  [ 2,  2,  2,  2],\n",
      " |                  [ 3,  3,  3,  3]])\n",
      " |          >>> x.expand(-1, 4)   # -1 means not changing the size of that dimension\n",
      " |          tensor([[ 1,  1,  1,  1],\n",
      " |                  [ 2,  2,  2,  2],\n",
      " |                  [ 3,  3,  3,  3]])\n",
      " |  \n",
      " |  expand_as(...)\n",
      " |      expand_as(other) -> Tensor\n",
      " |      \n",
      " |      Expand this tensor to the same size as :attr:`other`.\n",
      " |      ``self.expand_as(other)`` is equivalent to ``self.expand(other.size())``.\n",
      " |      \n",
      " |      Please see :meth:`~Tensor.expand` for more information about ``expand``.\n",
      " |      \n",
      " |      Args:\n",
      " |          other (:class:`torch.Tensor`): The result tensor has the same size\n",
      " |              as :attr:`other`.\n",
      " |  \n",
      " |  expm1(...)\n",
      " |      expm1() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.expm1`\n",
      " |  \n",
      " |  expm1_(...)\n",
      " |      expm1_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.expm1`\n",
      " |  \n",
      " |  exponential_(...)\n",
      " |      exponential_(lambd=1, *, generator=None) -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with elements drawn from the exponential distribution:\n",
      " |      \n",
      " |      .. math::\n",
      " |      \n",
      " |          f(x) = \\lambda e^{-\\lambda x}\n",
      " |  \n",
      " |  fft(...)\n",
      " |      fft(signal_ndim, normalized=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.fft`\n",
      " |  \n",
      " |  fill_(...)\n",
      " |      fill_(value) -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with the specified value.\n",
      " |  \n",
      " |  fill_diagonal_(...)\n",
      " |      fill_diagonal_(fill_value, wrap=False) -> Tensor\n",
      " |      \n",
      " |      Fill the main diagonal of a tensor that has at least 2-dimensions.\n",
      " |      When dims>2, all dimensions of input must be of equal length.\n",
      " |      This function modifies the input tensor in-place, and returns the input tensor.\n",
      " |      \n",
      " |      Arguments:\n",
      " |          fill_value (Scalar): the fill value\n",
      " |          wrap (bool): the diagonal 'wrapped' after N columns for tall matrices.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> a = torch.zeros(3, 3)\n",
      " |          >>> a.fill_diagonal_(5)\n",
      " |          tensor([[5., 0., 0.],\n",
      " |                  [0., 5., 0.],\n",
      " |                  [0., 0., 5.]])\n",
      " |          >>> b = torch.zeros(7, 3)\n",
      " |          >>> b.fill_diagonal_(5)\n",
      " |          tensor([[5., 0., 0.],\n",
      " |                  [0., 5., 0.],\n",
      " |                  [0., 0., 5.],\n",
      " |                  [0., 0., 0.],\n",
      " |                  [0., 0., 0.],\n",
      " |                  [0., 0., 0.],\n",
      " |                  [0., 0., 0.]])\n",
      " |          >>> c = torch.zeros(7, 3)\n",
      " |          >>> c.fill_diagonal_(5, wrap=True)\n",
      " |          tensor([[5., 0., 0.],\n",
      " |                  [0., 5., 0.],\n",
      " |                  [0., 0., 5.],\n",
      " |                  [0., 0., 0.],\n",
      " |                  [5., 0., 0.],\n",
      " |                  [0., 5., 0.],\n",
      " |                  [0., 0., 5.]])\n",
      " |  \n",
      " |  flatten(...)\n",
      " |      flatten(input, start_dim=0, end_dim=-1) -> Tensor\n",
      " |      \n",
      " |      see :func:`torch.flatten`\n",
      " |  \n",
      " |  flip(...)\n",
      " |      flip(dims) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.flip`\n",
      " |  \n",
      " |  float(...)\n",
      " |      float() -> Tensor\n",
      " |      \n",
      " |      ``self.float()`` is equivalent to ``self.to(torch.float32)``. See :func:`to`.\n",
      " |  \n",
      " |  floor(...)\n",
      " |      floor() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.floor`\n",
      " |  \n",
      " |  floor_(...)\n",
      " |      floor_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.floor`\n",
      " |  \n",
      " |  fmod(...)\n",
      " |      fmod(divisor) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.fmod`\n",
      " |  \n",
      " |  fmod_(...)\n",
      " |      fmod_(divisor) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.fmod`\n",
      " |  \n",
      " |  frac(...)\n",
      " |      frac() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.frac`\n",
      " |  \n",
      " |  frac_(...)\n",
      " |      frac_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.frac`\n",
      " |  \n",
      " |  gather(...)\n",
      " |      gather(dim, index) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.gather`\n",
      " |  \n",
      " |  ge(...)\n",
      " |      ge(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ge`\n",
      " |  \n",
      " |  ge_(...)\n",
      " |      ge_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.ge`\n",
      " |  \n",
      " |  geometric_(...)\n",
      " |      geometric_(p, *, generator=None) -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with elements drawn from the geometric distribution:\n",
      " |      \n",
      " |      .. math::\n",
      " |      \n",
      " |          f(X=k) = p^{k - 1} (1 - p)\n",
      " |  \n",
      " |  geqrf(...)\n",
      " |      geqrf() -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.geqrf`\n",
      " |  \n",
      " |  ger(...)\n",
      " |      ger(vec2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ger`\n",
      " |  \n",
      " |  get_device(...)\n",
      " |      get_device() -> Device ordinal (Integer)\n",
      " |      \n",
      " |      For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides.\n",
      " |      For CPU tensors, an error is thrown.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.randn(3, 4, 5, device='cuda:0')\n",
      " |          >>> x.get_device()\n",
      " |          0\n",
      " |          >>> x.cpu().get_device()  # RuntimeError: get_device is not implemented for type torch.FloatTensor\n",
      " |  \n",
      " |  gt(...)\n",
      " |      gt(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.gt`\n",
      " |  \n",
      " |  gt_(...)\n",
      " |      gt_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.gt`\n",
      " |  \n",
      " |  half(...)\n",
      " |      half() -> Tensor\n",
      " |      \n",
      " |      ``self.half()`` is equivalent to ``self.to(torch.float16)``. See :func:`to`.\n",
      " |  \n",
      " |  hardshrink(...)\n",
      " |      hardshrink(lambd=0.5) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.nn.functional.hardshrink`\n",
      " |  \n",
      " |  has_names(...)\n",
      " |      Is ``True`` if any of this tensor's dimensions are named. Otherwise, is ``False``.\n",
      " |  \n",
      " |  histc(...)\n",
      " |      histc(bins=100, min=0, max=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.histc`\n",
      " |  \n",
      " |  ifft(...)\n",
      " |      ifft(signal_ndim, normalized=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ifft`\n",
      " |  \n",
      " |  index_add(...)\n",
      " |      index_add(dim, index, tensor) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.index_add_`\n",
      " |  \n",
      " |  index_add_(...)\n",
      " |      index_add_(dim, index, tensor) -> Tensor\n",
      " |      \n",
      " |      Accumulate the elements of :attr:`tensor` into the :attr:`self` tensor by adding\n",
      " |      to the indices in the order given in :attr:`index`. For example, if ``dim == 0``\n",
      " |      and ``index[i] == j``, then the ``i``\\ th row of :attr:`tensor` is added to the\n",
      " |      ``j``\\ th row of :attr:`self`.\n",
      " |      \n",
      " |      The :attr:`dim`\\ th dimension of :attr:`tensor` must have the same size as the\n",
      " |      length of :attr:`index` (which must be a vector), and all other dimensions must\n",
      " |      match :attr:`self`, or an error will be raised.\n",
      " |      \n",
      " |      .. include:: cuda_deterministic.rst\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): dimension along which to index\n",
      " |          index (LongTensor): indices of :attr:`tensor` to select from\n",
      " |          tensor (Tensor): the tensor containing values to add\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.ones(5, 3)\n",
      " |          >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n",
      " |          >>> index = torch.tensor([0, 4, 2])\n",
      " |          >>> x.index_add_(0, index, t)\n",
      " |          tensor([[  2.,   3.,   4.],\n",
      " |                  [  1.,   1.,   1.],\n",
      " |                  [  8.,   9.,  10.],\n",
      " |                  [  1.,   1.,   1.],\n",
      " |                  [  5.,   6.,   7.]])\n",
      " |  \n",
      " |  index_copy(...)\n",
      " |      index_copy(dim, index, tensor) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.index_copy_`\n",
      " |  \n",
      " |  index_copy_(...)\n",
      " |      index_copy_(dim, index, tensor) -> Tensor\n",
      " |      \n",
      " |      Copies the elements of :attr:`tensor` into the :attr:`self` tensor by selecting\n",
      " |      the indices in the order given in :attr:`index`. For example, if ``dim == 0``\n",
      " |      and ``index[i] == j``, then the ``i``\\ th row of :attr:`tensor` is copied to the\n",
      " |      ``j``\\ th row of :attr:`self`.\n",
      " |      \n",
      " |      The :attr:`dim`\\ th dimension of :attr:`tensor` must have the same size as the\n",
      " |      length of :attr:`index` (which must be a vector), and all other dimensions must\n",
      " |      match :attr:`self`, or an error will be raised.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): dimension along which to index\n",
      " |          index (LongTensor): indices of :attr:`tensor` to select from\n",
      " |          tensor (Tensor): the tensor containing values to copy\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.zeros(5, 3)\n",
      " |          >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n",
      " |          >>> index = torch.tensor([0, 4, 2])\n",
      " |          >>> x.index_copy_(0, index, t)\n",
      " |          tensor([[ 1.,  2.,  3.],\n",
      " |                  [ 0.,  0.,  0.],\n",
      " |                  [ 7.,  8.,  9.],\n",
      " |                  [ 0.,  0.,  0.],\n",
      " |                  [ 4.,  5.,  6.]])\n",
      " |  \n",
      " |  index_fill(...)\n",
      " |      index_fill(dim, index, value) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.index_fill_`\n",
      " |  \n",
      " |  index_fill_(...)\n",
      " |      index_fill_(dim, index, val) -> Tensor\n",
      " |      \n",
      " |      Fills the elements of the :attr:`self` tensor with value :attr:`val` by\n",
      " |      selecting the indices in the order given in :attr:`index`.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): dimension along which to index\n",
      " |          index (LongTensor): indices of :attr:`self` tensor to fill in\n",
      " |          val (float): the value to fill with\n",
      " |      \n",
      " |      Example::\n",
      " |          >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n",
      " |          >>> index = torch.tensor([0, 2])\n",
      " |          >>> x.index_fill_(1, index, -1)\n",
      " |          tensor([[-1.,  2., -1.],\n",
      " |                  [-1.,  5., -1.],\n",
      " |                  [-1.,  8., -1.]])\n",
      " |  \n",
      " |  index_put(...)\n",
      " |      index_put(indices, value, accumulate=False) -> Tensor\n",
      " |      \n",
      " |      Out-place version of :meth:`~Tensor.index_put_`\n",
      " |  \n",
      " |  index_put_(...)\n",
      " |      index_put_(indices, value, accumulate=False) -> Tensor\n",
      " |      \n",
      " |      Puts values from the tensor :attr:`value` into the tensor :attr:`self` using\n",
      " |      the indices specified in :attr:`indices` (which is a tuple of Tensors). The\n",
      " |      expression ``tensor.index_put_(indices, value)`` is equivalent to\n",
      " |      ``tensor[indices] = value``. Returns :attr:`self`.\n",
      " |      \n",
      " |      If :attr:`accumulate` is ``True``, the elements in :attr:`tensor` are added to\n",
      " |      :attr:`self`. If accumulate is ``False``, the behavior is undefined if indices\n",
      " |      contain duplicate elements.\n",
      " |      \n",
      " |      Args:\n",
      " |          indices (tuple of LongTensor): tensors used to index into `self`.\n",
      " |          value (Tensor): tensor of same dtype as `self`.\n",
      " |          accumulate (bool): whether to accumulate into self\n",
      " |  \n",
      " |  index_select(...)\n",
      " |      index_select(dim, index) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.index_select`\n",
      " |  \n",
      " |  indices(...)\n",
      " |      indices() -> Tensor\n",
      " |      \n",
      " |      If :attr:`self` is a sparse COO tensor (i.e., with ``torch.sparse_coo`` layout),\n",
      " |      this returns a view of the contained indices tensor. Otherwise, this throws an\n",
      " |      error.\n",
      " |      \n",
      " |      See also :meth:`Tensor.values`.\n",
      " |      \n",
      " |      .. note::\n",
      " |        This method can only be called on a coalesced sparse tensor. See\n",
      " |        :meth:`Tensor.coalesce` for details.\n",
      " |  \n",
      " |  int(...)\n",
      " |      int() -> Tensor\n",
      " |      \n",
      " |      ``self.int()`` is equivalent to ``self.to(torch.int32)``. See :func:`to`.\n",
      " |  \n",
      " |  int_repr(...)\n",
      " |      int_repr() -> Tensor\n",
      " |      \n",
      " |      Given a quantized Tensor,\n",
      " |      ``self.int_repr()`` returns a CPU Tensor with uint8_t as data type that stores the\n",
      " |      underlying uint8_t values of the given Tensor.\n",
      " |  \n",
      " |  inverse(...)\n",
      " |      inverse() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.inverse`\n",
      " |  \n",
      " |  irfft(...)\n",
      " |      irfft(signal_ndim, normalized=False, onesided=True, signal_sizes=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.irfft`\n",
      " |  \n",
      " |  is_coalesced(...)\n",
      " |  \n",
      " |  is_complex(...)\n",
      " |  \n",
      " |  is_contiguous(...)\n",
      " |      is_contiguous() -> bool\n",
      " |      \n",
      " |      Returns True if :attr:`self` tensor is contiguous in memory in C order.\n",
      " |  \n",
      " |  is_distributed(...)\n",
      " |  \n",
      " |  is_floating_point(...)\n",
      " |      is_floating_point() -> bool\n",
      " |      \n",
      " |      Returns True if the data type of :attr:`self` is a floating point data type.\n",
      " |  \n",
      " |  is_nonzero(...)\n",
      " |  \n",
      " |  is_pinned(...)\n",
      " |      Returns true if this tensor resides in pinned memory.\n",
      " |  \n",
      " |  is_same_size(...)\n",
      " |  \n",
      " |  is_set_to(...)\n",
      " |      is_set_to(tensor) -> bool\n",
      " |      \n",
      " |      Returns True if this object refers to the same ``THTensor`` object from the\n",
      " |      Torch C API as the given tensor.\n",
      " |  \n",
      " |  is_signed(...)\n",
      " |      is_signed() -> bool\n",
      " |      \n",
      " |      Returns True if the data type of :attr:`self` is a signed data type.\n",
      " |  \n",
      " |  isclose(...)\n",
      " |  \n",
      " |  item(...)\n",
      " |      item() -> number\n",
      " |      \n",
      " |      Returns the value of this tensor as a standard Python number. This only works\n",
      " |      for tensors with one element. For other cases, see :meth:`~Tensor.tolist`.\n",
      " |      \n",
      " |      This operation is not differentiable.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([1.0])\n",
      " |          >>> x.item()\n",
      " |          1.0\n",
      " |  \n",
      " |  kthvalue(...)\n",
      " |      kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor)\n",
      " |      \n",
      " |      See :func:`torch.kthvalue`\n",
      " |  \n",
      " |  le(...)\n",
      " |      le(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.le`\n",
      " |  \n",
      " |  le_(...)\n",
      " |      le_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.le`\n",
      " |  \n",
      " |  lerp(...)\n",
      " |      lerp(end, weight) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.lerp`\n",
      " |  \n",
      " |  lerp_(...)\n",
      " |      lerp_(end, weight) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.lerp`\n",
      " |  \n",
      " |  lgamma(...)\n",
      " |      lgamma() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.lgamma`\n",
      " |  \n",
      " |  lgamma_(...)\n",
      " |      lgamma_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.lgamma`\n",
      " |  \n",
      " |  log(...)\n",
      " |      log() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.log`\n",
      " |  \n",
      " |  log10(...)\n",
      " |      log10() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.log10`\n",
      " |  \n",
      " |  log10_(...)\n",
      " |      log10_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.log10`\n",
      " |  \n",
      " |  log1p(...)\n",
      " |      log1p() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.log1p`\n",
      " |  \n",
      " |  log1p_(...)\n",
      " |      log1p_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.log1p`\n",
      " |  \n",
      " |  log2(...)\n",
      " |      log2() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.log2`\n",
      " |  \n",
      " |  log2_(...)\n",
      " |      log2_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.log2`\n",
      " |  \n",
      " |  log_(...)\n",
      " |      log_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.log`\n",
      " |  \n",
      " |  log_normal_(...)\n",
      " |      log_normal_(mean=1, std=2, *, generator=None)\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with numbers samples from the log-normal distribution\n",
      " |      parameterized by the given mean :math:`\\mu` and standard deviation\n",
      " |      :math:`\\sigma`. Note that :attr:`mean` and :attr:`std` are the mean and\n",
      " |      standard deviation of the underlying normal distribution, and not of the\n",
      " |      returned distribution:\n",
      " |      \n",
      " |      .. math::\n",
      " |      \n",
      " |          f(x) = \\dfrac{1}{x \\sigma \\sqrt{2\\pi}}\\ e^{-\\frac{(\\ln x - \\mu)^2}{2\\sigma^2}}\n",
      " |  \n",
      " |  log_softmax(...)\n",
      " |  \n",
      " |  logdet(...)\n",
      " |      logdet() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.logdet`\n",
      " |  \n",
      " |  logical_not(...)\n",
      " |      logical_not() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.logical_not`\n",
      " |  \n",
      " |  logical_not_(...)\n",
      " |      logical_not_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.logical_not`\n",
      " |  \n",
      " |  logical_xor(...)\n",
      " |      logical_xor() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.logical_xor`\n",
      " |  \n",
      " |  logical_xor_(...)\n",
      " |      logical_xor_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.logical_xor`\n",
      " |  \n",
      " |  logsumexp(...)\n",
      " |      logsumexp(dim, keepdim=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.logsumexp`\n",
      " |  \n",
      " |  long(...)\n",
      " |      long() -> Tensor\n",
      " |      \n",
      " |      ``self.long()`` is equivalent to ``self.to(torch.int64)``. See :func:`to`.\n",
      " |  \n",
      " |  lstsq(...)\n",
      " |      lstsq(A) -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.lstsq`\n",
      " |  \n",
      " |  lt(...)\n",
      " |      lt(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.lt`\n",
      " |  \n",
      " |  lt_(...)\n",
      " |      lt_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.lt`\n",
      " |  \n",
      " |  lu_solve(...)\n",
      " |      lu_solve(LU_data, LU_pivots) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.lu_solve`\n",
      " |  \n",
      " |  map2_(...)\n",
      " |  \n",
      " |  map_(...)\n",
      " |      map_(tensor, callable)\n",
      " |      \n",
      " |      Applies :attr:`callable` for each element in :attr:`self` tensor and the given\n",
      " |      :attr:`tensor` and stores the results in :attr:`self` tensor. :attr:`self` tensor and\n",
      " |      the given :attr:`tensor` must be :ref:`broadcastable <broadcasting-semantics>`.\n",
      " |      \n",
      " |      The :attr:`callable` should have the signature::\n",
      " |      \n",
      " |          def callable(a, b) -> number\n",
      " |  \n",
      " |  masked_fill(...)\n",
      " |      masked_fill(mask, value) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.masked_fill_`\n",
      " |  \n",
      " |  masked_fill_(...)\n",
      " |      masked_fill_(mask, value)\n",
      " |      \n",
      " |      Fills elements of :attr:`self` tensor with :attr:`value` where :attr:`mask` is\n",
      " |      True. The shape of :attr:`mask` must be\n",
      " |      :ref:`broadcastable <broadcasting-semantics>` with the shape of the underlying\n",
      " |      tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          mask (BoolTensor): the boolean mask\n",
      " |          value (float): the value to fill in with\n",
      " |  \n",
      " |  masked_scatter(...)\n",
      " |      masked_scatter(mask, tensor) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.masked_scatter_`\n",
      " |  \n",
      " |  masked_scatter_(...)\n",
      " |      masked_scatter_(mask, source)\n",
      " |      \n",
      " |      Copies elements from :attr:`source` into :attr:`self` tensor at positions where\n",
      " |      the :attr:`mask` is True.\n",
      " |      The shape of :attr:`mask` must be :ref:`broadcastable <broadcasting-semantics>`\n",
      " |      with the shape of the underlying tensor. The :attr:`source` should have at least\n",
      " |      as many elements as the number of ones in :attr:`mask`\n",
      " |      \n",
      " |      Args:\n",
      " |          mask (BoolTensor): the boolean mask\n",
      " |          source (Tensor): the tensor to copy from\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          The :attr:`mask` operates on the :attr:`self` tensor, not on the given\n",
      " |          :attr:`source` tensor.\n",
      " |  \n",
      " |  masked_select(...)\n",
      " |      masked_select(mask) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.masked_select`\n",
      " |  \n",
      " |  matmul(...)\n",
      " |      matmul(tensor2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.matmul`\n",
      " |  \n",
      " |  matrix_power(...)\n",
      " |      matrix_power(n) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.matrix_power`\n",
      " |  \n",
      " |  max(...)\n",
      " |      max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.max`\n",
      " |  \n",
      " |  mean(...)\n",
      " |      mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.mean`\n",
      " |  \n",
      " |  median(...)\n",
      " |      median(dim=None, keepdim=False) -> (Tensor, LongTensor)\n",
      " |      \n",
      " |      See :func:`torch.median`\n",
      " |  \n",
      " |  min(...)\n",
      " |      min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.min`\n",
      " |  \n",
      " |  mm(...)\n",
      " |      mm(mat2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.mm`\n",
      " |  \n",
      " |  mode(...)\n",
      " |      mode(dim=None, keepdim=False) -> (Tensor, LongTensor)\n",
      " |      \n",
      " |      See :func:`torch.mode`\n",
      " |  \n",
      " |  mul(...)\n",
      " |      mul(value) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.mul`\n",
      " |  \n",
      " |  mul_(...)\n",
      " |      mul_(value)\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.mul`\n",
      " |  \n",
      " |  multinomial(...)\n",
      " |      multinomial(num_samples, replacement=False, *, generator=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.multinomial`\n",
      " |  \n",
      " |  mv(...)\n",
      " |      mv(vec) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.mv`\n",
      " |  \n",
      " |  mvlgamma(...)\n",
      " |      mvlgamma(p) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.mvlgamma`\n",
      " |  \n",
      " |  mvlgamma_(...)\n",
      " |      mvlgamma_(p) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.mvlgamma`\n",
      " |  \n",
      " |  narrow(...)\n",
      " |      narrow(dimension, start, length) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.narrow`\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n",
      " |          >>> x.narrow(0, 0, 2)\n",
      " |          tensor([[ 1,  2,  3],\n",
      " |                  [ 4,  5,  6]])\n",
      " |          >>> x.narrow(1, 1, 2)\n",
      " |          tensor([[ 2,  3],\n",
      " |                  [ 5,  6],\n",
      " |                  [ 8,  9]])\n",
      " |  \n",
      " |  narrow_copy(...)\n",
      " |      narrow_copy(dimension, start, length) -> Tensor\n",
      " |      \n",
      " |      Same as :meth:`Tensor.narrow` except returning a copy rather\n",
      " |      than shared storage.  This is primarily for sparse tensors, which\n",
      " |      do not have a shared-storage narrow method.  Calling ```narrow_copy``\n",
      " |      with ```dimemsion > self.sparse_dim()``` will return a copy with the\n",
      " |      relevant dense dimension narrowed, and ```self.shape``` updated accordingly.\n",
      " |  \n",
      " |  ndimension(...)\n",
      " |      ndimension() -> int\n",
      " |      \n",
      " |      Alias for :meth:`~Tensor.dim()`\n",
      " |  \n",
      " |  ne(...)\n",
      " |      ne(other) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ne`\n",
      " |  \n",
      " |  ne_(...)\n",
      " |      ne_(other) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.ne`\n",
      " |  \n",
      " |  neg(...)\n",
      " |      neg() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.neg`\n",
      " |  \n",
      " |  neg_(...)\n",
      " |      neg_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.neg`\n",
      " |  \n",
      " |  nelement(...)\n",
      " |      nelement() -> int\n",
      " |      \n",
      " |      Alias for :meth:`~Tensor.numel`\n",
      " |  \n",
      " |  new(...)\n",
      " |  \n",
      " |  new_empty(...)\n",
      " |      new_empty(size, dtype=None, device=None, requires_grad=False) -> Tensor\n",
      " |      \n",
      " |      Returns a Tensor of size :attr:`size` filled with uninitialized data.\n",
      " |      By default, the returned Tensor has the same :class:`torch.dtype` and\n",
      " |      :class:`torch.device` as this tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          dtype (:class:`torch.dtype`, optional): the desired type of returned tensor.\n",
      " |              Default: if None, same :class:`torch.dtype` as this tensor.\n",
      " |          device (:class:`torch.device`, optional): the desired device of returned tensor.\n",
      " |              Default: if None, same :class:`torch.device` as this tensor.\n",
      " |          requires_grad (bool, optional): If autograd should record operations on the\n",
      " |              returned tensor. Default: ``False``.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> tensor = torch.ones(())\n",
      " |          >>> tensor.new_empty((2, 3))\n",
      " |          tensor([[ 5.8182e-18,  4.5765e-41, -1.0545e+30],\n",
      " |                  [ 3.0949e-41,  4.4842e-44,  0.0000e+00]])\n",
      " |  \n",
      " |  new_full(...)\n",
      " |      new_full(size, fill_value, dtype=None, device=None, requires_grad=False) -> Tensor\n",
      " |      \n",
      " |      Returns a Tensor of size :attr:`size` filled with :attr:`fill_value`.\n",
      " |      By default, the returned Tensor has the same :class:`torch.dtype` and\n",
      " |      :class:`torch.device` as this tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          fill_value (scalar): the number to fill the output tensor with.\n",
      " |          dtype (:class:`torch.dtype`, optional): the desired type of returned tensor.\n",
      " |              Default: if None, same :class:`torch.dtype` as this tensor.\n",
      " |          device (:class:`torch.device`, optional): the desired device of returned tensor.\n",
      " |              Default: if None, same :class:`torch.device` as this tensor.\n",
      " |          requires_grad (bool, optional): If autograd should record operations on the\n",
      " |              returned tensor. Default: ``False``.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> tensor = torch.ones((2,), dtype=torch.float64)\n",
      " |          >>> tensor.new_full((3, 4), 3.141592)\n",
      " |          tensor([[ 3.1416,  3.1416,  3.1416,  3.1416],\n",
      " |                  [ 3.1416,  3.1416,  3.1416,  3.1416],\n",
      " |                  [ 3.1416,  3.1416,  3.1416,  3.1416]], dtype=torch.float64)\n",
      " |  \n",
      " |  new_ones(...)\n",
      " |      new_ones(size, dtype=None, device=None, requires_grad=False) -> Tensor\n",
      " |      \n",
      " |      Returns a Tensor of size :attr:`size` filled with ``1``.\n",
      " |      By default, the returned Tensor has the same :class:`torch.dtype` and\n",
      " |      :class:`torch.device` as this tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          size (int...): a list, tuple, or :class:`torch.Size` of integers defining the\n",
      " |              shape of the output tensor.\n",
      " |          dtype (:class:`torch.dtype`, optional): the desired type of returned tensor.\n",
      " |              Default: if None, same :class:`torch.dtype` as this tensor.\n",
      " |          device (:class:`torch.device`, optional): the desired device of returned tensor.\n",
      " |              Default: if None, same :class:`torch.device` as this tensor.\n",
      " |          requires_grad (bool, optional): If autograd should record operations on the\n",
      " |              returned tensor. Default: ``False``.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> tensor = torch.tensor((), dtype=torch.int32)\n",
      " |          >>> tensor.new_ones((2, 3))\n",
      " |          tensor([[ 1,  1,  1],\n",
      " |                  [ 1,  1,  1]], dtype=torch.int32)\n",
      " |  \n",
      " |  new_tensor(...)\n",
      " |      new_tensor(data, dtype=None, device=None, requires_grad=False) -> Tensor\n",
      " |      \n",
      " |      Returns a new Tensor with :attr:`data` as the tensor data.\n",
      " |      By default, the returned Tensor has the same :class:`torch.dtype` and\n",
      " |      :class:`torch.device` as this tensor.\n",
      " |      \n",
      " |      .. warning::\n",
      " |      \n",
      " |          :func:`new_tensor` always copies :attr:`data`. If you have a Tensor\n",
      " |          ``data`` and want to avoid a copy, use :func:`torch.Tensor.requires_grad_`\n",
      " |          or :func:`torch.Tensor.detach`.\n",
      " |          If you have a numpy array and want to avoid a copy, use\n",
      " |          :func:`torch.from_numpy`.\n",
      " |      \n",
      " |      .. warning::\n",
      " |      \n",
      " |          When data is a tensor `x`, :func:`new_tensor()` reads out 'the data' from whatever it is passed,\n",
      " |          and constructs a leaf variable. Therefore ``tensor.new_tensor(x)`` is equivalent to ``x.clone().detach()``\n",
      " |          and ``tensor.new_tensor(x, requires_grad=True)`` is equivalent to ``x.clone().detach().requires_grad_(True)``.\n",
      " |          The equivalents using ``clone()`` and ``detach()`` are recommended.\n",
      " |      \n",
      " |      Args:\n",
      " |          data (array_like): The returned Tensor copies :attr:`data`.\n",
      " |          dtype (:class:`torch.dtype`, optional): the desired type of returned tensor.\n",
      " |              Default: if None, same :class:`torch.dtype` as this tensor.\n",
      " |          device (:class:`torch.device`, optional): the desired device of returned tensor.\n",
      " |              Default: if None, same :class:`torch.device` as this tensor.\n",
      " |          requires_grad (bool, optional): If autograd should record operations on the\n",
      " |              returned tensor. Default: ``False``.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> tensor = torch.ones((2,), dtype=torch.int8)\n",
      " |          >>> data = [[0, 1], [2, 3]]\n",
      " |          >>> tensor.new_tensor(data)\n",
      " |          tensor([[ 0,  1],\n",
      " |                  [ 2,  3]], dtype=torch.int8)\n",
      " |  \n",
      " |  new_zeros(...)\n",
      " |      new_zeros(size, dtype=None, device=None, requires_grad=False) -> Tensor\n",
      " |      \n",
      " |      Returns a Tensor of size :attr:`size` filled with ``0``.\n",
      " |      By default, the returned Tensor has the same :class:`torch.dtype` and\n",
      " |      :class:`torch.device` as this tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          size (int...): a list, tuple, or :class:`torch.Size` of integers defining the\n",
      " |              shape of the output tensor.\n",
      " |          dtype (:class:`torch.dtype`, optional): the desired type of returned tensor.\n",
      " |              Default: if None, same :class:`torch.dtype` as this tensor.\n",
      " |          device (:class:`torch.device`, optional): the desired device of returned tensor.\n",
      " |              Default: if None, same :class:`torch.device` as this tensor.\n",
      " |          requires_grad (bool, optional): If autograd should record operations on the\n",
      " |              returned tensor. Default: ``False``.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> tensor = torch.tensor((), dtype=torch.float64)\n",
      " |          >>> tensor.new_zeros((2, 3))\n",
      " |          tensor([[ 0.,  0.,  0.],\n",
      " |                  [ 0.,  0.,  0.]], dtype=torch.float64)\n",
      " |  \n",
      " |  nonzero(...)\n",
      " |      nonzero() -> LongTensor\n",
      " |      \n",
      " |      See :func:`torch.nonzero`\n",
      " |  \n",
      " |  normal_(...)\n",
      " |      normal_(mean=0, std=1, *, generator=None) -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with elements samples from the normal distribution\n",
      " |      parameterized by :attr:`mean` and :attr:`std`.\n",
      " |  \n",
      " |  numel(...)\n",
      " |      numel() -> int\n",
      " |      \n",
      " |      See :func:`torch.numel`\n",
      " |  \n",
      " |  numpy(...)\n",
      " |      numpy() -> numpy.ndarray\n",
      " |      \n",
      " |      Returns :attr:`self` tensor as a NumPy :class:`ndarray`. This tensor and the\n",
      " |      returned :class:`ndarray` share the same underlying storage. Changes to\n",
      " |      :attr:`self` tensor will be reflected in the :class:`ndarray` and vice versa.\n",
      " |  \n",
      " |  orgqr(...)\n",
      " |      orgqr(input2) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.orgqr`\n",
      " |  \n",
      " |  ormqr(...)\n",
      " |      ormqr(input2, input3, left=True, transpose=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.ormqr`\n",
      " |  \n",
      " |  permute(...)\n",
      " |      permute(*dims) -> Tensor\n",
      " |      \n",
      " |      Permute the dimensions of this tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          *dims (int...): The desired ordering of dimensions\n",
      " |      \n",
      " |      Example:\n",
      " |          >>> x = torch.randn(2, 3, 5)\n",
      " |          >>> x.size()\n",
      " |          torch.Size([2, 3, 5])\n",
      " |          >>> x.permute(2, 0, 1).size()\n",
      " |          torch.Size([5, 2, 3])\n",
      " |  \n",
      " |  pin_memory(...)\n",
      " |      pin_memory() -> Tensor\n",
      " |      \n",
      " |      Copies the tensor to pinned memory, if it's not already pinned.\n",
      " |  \n",
      " |  pinverse(...)\n",
      " |      pinverse() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.pinverse`\n",
      " |  \n",
      " |  polygamma(...)\n",
      " |      polygamma(n) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.polygamma`\n",
      " |  \n",
      " |  polygamma_(...)\n",
      " |      polygamma_(n) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.polygamma`\n",
      " |  \n",
      " |  pow(...)\n",
      " |      pow(exponent) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.pow`\n",
      " |  \n",
      " |  pow_(...)\n",
      " |      pow_(exponent) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.pow`\n",
      " |  \n",
      " |  prelu(...)\n",
      " |  \n",
      " |  prod(...)\n",
      " |      prod(dim=None, keepdim=False, dtype=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.prod`\n",
      " |  \n",
      " |  put_(...)\n",
      " |      put_(indices, tensor, accumulate=False) -> Tensor\n",
      " |      \n",
      " |      Copies the elements from :attr:`tensor` into the positions specified by\n",
      " |      indices. For the purpose of indexing, the :attr:`self` tensor is treated as if\n",
      " |      it were a 1-D tensor.\n",
      " |      \n",
      " |      If :attr:`accumulate` is ``True``, the elements in :attr:`tensor` are added to\n",
      " |      :attr:`self`. If accumulate is ``False``, the behavior is undefined if indices\n",
      " |      contain duplicate elements.\n",
      " |      \n",
      " |      Args:\n",
      " |          indices (LongTensor): the indices into self\n",
      " |          tensor (Tensor): the tensor containing values to copy from\n",
      " |          accumulate (bool): whether to accumulate into self\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> src = torch.tensor([[4, 3, 5],\n",
      " |                                  [6, 7, 8]])\n",
      " |          >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))\n",
      " |          tensor([[  4,   9,   5],\n",
      " |                  [ 10,   7,   8]])\n",
      " |  \n",
      " |  q_per_channel_axis(...)\n",
      " |      q_per_channel_axis() -> int\n",
      " |      \n",
      " |      Given a Tensor quantized by linear (affine) per-channel quantization,\n",
      " |      returns the index of dimension on which per-channel quantization is applied.\n",
      " |  \n",
      " |  q_per_channel_scales(...)\n",
      " |      q_per_channel_scales() -> Tensor\n",
      " |      \n",
      " |      Given a Tensor quantized by linear (affine) per-channel quantization,\n",
      " |      returns a Tensor of scales of the underlying quantizer. It has the number of\n",
      " |      elements that matches the corresponding dimensions (from q_per_channel_axis) of\n",
      " |      the tensor.\n",
      " |  \n",
      " |  q_per_channel_zero_points(...)\n",
      " |      q_per_channel_zero_points() -> Tensor\n",
      " |      \n",
      " |      Given a Tensor quantized by linear (affine) per-channel quantization,\n",
      " |      returns a tensor of zero_points of the underlying quantizer. It has the number of\n",
      " |      elements that matches the corresponding dimensions (from q_per_channel_axis) of\n",
      " |      the tensor.\n",
      " |  \n",
      " |  q_scale(...)\n",
      " |      q_scale() -> float\n",
      " |      \n",
      " |      Given a Tensor quantized by linear(affine) quantization,\n",
      " |      returns the scale of the underlying quantizer().\n",
      " |  \n",
      " |  q_zero_point(...)\n",
      " |      q_zero_point() -> int\n",
      " |      \n",
      " |      Given a Tensor quantized by linear(affine) quantization,\n",
      " |      returns the zero_point of the underlying quantizer().\n",
      " |  \n",
      " |  qr(...)\n",
      " |      qr(some=True) -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.qr`\n",
      " |  \n",
      " |  qscheme(...)\n",
      " |      qscheme() -> torch.qscheme\n",
      " |      \n",
      " |      Returns the quantization scheme of a given QTensor.\n",
      " |  \n",
      " |  random_(...)\n",
      " |      random_(from=0, to=None, *, generator=None) -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with numbers sampled from the discrete uniform\n",
      " |      distribution over ``[from, to - 1]``. If not specified, the values are usually\n",
      " |      only bounded by :attr:`self` tensor's data type. However, for floating point\n",
      " |      types, if unspecified, range will be ``[0, 2^mantissa]`` to ensure that every\n",
      " |      value is representable. For example, `torch.tensor(1, dtype=torch.double).random_()`\n",
      " |      will be uniform in ``[0, 2^53]``.\n",
      " |  \n",
      " |  reciprocal(...)\n",
      " |      reciprocal() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.reciprocal`\n",
      " |  \n",
      " |  reciprocal_(...)\n",
      " |      reciprocal_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.reciprocal`\n",
      " |  \n",
      " |  record_stream(...)\n",
      " |      record_stream(stream)\n",
      " |      \n",
      " |      Ensures that the tensor memory is not reused for another tensor until all\n",
      " |      current work queued on :attr:`stream` are complete.\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          The caching allocator is aware of only the stream where a tensor was\n",
      " |          allocated. Due to the awareness, it already correctly manages the life\n",
      " |          cycle of tensors on only one stream. But if a tensor is used on a stream\n",
      " |          different from the stream of origin, the allocator might reuse the memory\n",
      " |          unexpectedly. Calling this method lets the allocator know which streams\n",
      " |          have used the tensor.\n",
      " |  \n",
      " |  relu(...)\n",
      " |  \n",
      " |  relu_(...)\n",
      " |  \n",
      " |  remainder(...)\n",
      " |      remainder(divisor) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.remainder`\n",
      " |  \n",
      " |  remainder_(...)\n",
      " |      remainder_(divisor) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.remainder`\n",
      " |  \n",
      " |  renorm(...)\n",
      " |      renorm(p, dim, maxnorm) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.renorm`\n",
      " |  \n",
      " |  renorm_(...)\n",
      " |      renorm_(p, dim, maxnorm) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.renorm`\n",
      " |  \n",
      " |  repeat(...)\n",
      " |      repeat(*sizes) -> Tensor\n",
      " |      \n",
      " |      Repeats this tensor along the specified dimensions.\n",
      " |      \n",
      " |      Unlike :meth:`~Tensor.expand`, this function copies the tensor's data.\n",
      " |      \n",
      " |      .. warning::\n",
      " |      \n",
      " |          :func:`torch.repeat` behaves differently from\n",
      " |          `numpy.repeat <https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html>`_,\n",
      " |          but is more similar to\n",
      " |          `numpy.tile <https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html>`_.\n",
      " |          For the operator similar to `numpy.repeat`, see :func:`torch.repeat_interleave`.\n",
      " |      \n",
      " |      Args:\n",
      " |          sizes (torch.Size or int...): The number of times to repeat this tensor along each\n",
      " |              dimension\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([1, 2, 3])\n",
      " |          >>> x.repeat(4, 2)\n",
      " |          tensor([[ 1,  2,  3,  1,  2,  3],\n",
      " |                  [ 1,  2,  3,  1,  2,  3],\n",
      " |                  [ 1,  2,  3,  1,  2,  3],\n",
      " |                  [ 1,  2,  3,  1,  2,  3]])\n",
      " |          >>> x.repeat(4, 2, 1).size()\n",
      " |          torch.Size([4, 2, 3])\n",
      " |  \n",
      " |  repeat_interleave(...)\n",
      " |      repeat_interleave(repeats, dim=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.repeat_interleave`.\n",
      " |  \n",
      " |  requires_grad_(...)\n",
      " |      requires_grad_(requires_grad=True) -> Tensor\n",
      " |      \n",
      " |      Change if autograd should record operations on this tensor: sets this tensor's\n",
      " |      :attr:`requires_grad` attribute in-place. Returns this tensor.\n",
      " |      \n",
      " |      :func:`requires_grad_`'s main use case is to tell autograd to begin recording\n",
      " |      operations on a Tensor ``tensor``. If ``tensor`` has ``requires_grad=False``\n",
      " |      (because it was obtained through a DataLoader, or required preprocessing or\n",
      " |      initialization), ``tensor.requires_grad_()`` makes it so that autograd will\n",
      " |      begin to record operations on ``tensor``.\n",
      " |      \n",
      " |      Args:\n",
      " |          requires_grad (bool): If autograd should record operations on this tensor.\n",
      " |              Default: ``True``.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> # Let's say we want to preprocess some saved weights and use\n",
      " |          >>> # the result as new weights.\n",
      " |          >>> saved_weights = [0.1, 0.2, 0.3, 0.25]\n",
      " |          >>> loaded_weights = torch.tensor(saved_weights)\n",
      " |          >>> weights = preprocess(loaded_weights)  # some function\n",
      " |          >>> weights\n",
      " |          tensor([-0.5503,  0.4926, -2.1158, -0.8303])\n",
      " |      \n",
      " |          >>> # Now, start to record operations done to weights\n",
      " |          >>> weights.requires_grad_()\n",
      " |          >>> out = weights.pow(2).sum()\n",
      " |          >>> out.backward()\n",
      " |          >>> weights.grad\n",
      " |          tensor([-1.1007,  0.9853, -4.2316, -1.6606])\n",
      " |  \n",
      " |  reshape(...)\n",
      " |      reshape(*shape) -> Tensor\n",
      " |      \n",
      " |      Returns a tensor with the same data and number of elements as :attr:`self`\n",
      " |      but with the specified shape. This method returns a view if :attr:`shape` is\n",
      " |      compatible with the current shape. See :meth:`torch.Tensor.view` on when it is\n",
      " |      possible to return a view.\n",
      " |      \n",
      " |      See :func:`torch.reshape`\n",
      " |      \n",
      " |      Args:\n",
      " |          shape (tuple of ints or int...): the desired shape\n",
      " |  \n",
      " |  reshape_as(...)\n",
      " |      reshape_as(other) -> Tensor\n",
      " |      \n",
      " |      Returns this tensor as the same shape as :attr:`other`.\n",
      " |      ``self.reshape_as(other)`` is equivalent to ``self.reshape(other.sizes())``.\n",
      " |      This method returns a view if ``other.sizes()`` is compatible with the current\n",
      " |      shape. See :meth:`torch.Tensor.view` on when it is possible to return a view.\n",
      " |      \n",
      " |      Please see :meth:`reshape` for more information about ``reshape``.\n",
      " |      \n",
      " |      Args:\n",
      " |          other (:class:`torch.Tensor`): The result tensor has the same shape\n",
      " |              as :attr:`other`.\n",
      " |  \n",
      " |  resize_(...)\n",
      " |      resize_(*sizes) -> Tensor\n",
      " |      \n",
      " |      Resizes :attr:`self` tensor to the specified size. If the number of elements is\n",
      " |      larger than the current storage size, then the underlying storage is resized\n",
      " |      to fit the new number of elements. If the number of elements is smaller, the\n",
      " |      underlying storage is not changed. Existing elements are preserved but any new\n",
      " |      memory is uninitialized.\n",
      " |      \n",
      " |      .. warning::\n",
      " |      \n",
      " |          This is a low-level method. The storage is reinterpreted as C-contiguous,\n",
      " |          ignoring the current strides (unless the target size equals the current\n",
      " |          size, in which case the tensor is left unchanged). For most purposes, you\n",
      " |          will instead want to use :meth:`~Tensor.view()`, which checks for\n",
      " |          contiguity, or :meth:`~Tensor.reshape()`, which copies data if needed. To\n",
      " |          change the size in-place with custom strides, see :meth:`~Tensor.set_()`.\n",
      " |      \n",
      " |      Args:\n",
      " |          sizes (torch.Size or int...): the desired size\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])\n",
      " |          >>> x.resize_(2, 2)\n",
      " |          tensor([[ 1,  2],\n",
      " |                  [ 3,  4]])\n",
      " |  \n",
      " |  resize_as_(...)\n",
      " |      resize_as_(tensor) -> Tensor\n",
      " |      \n",
      " |      Resizes the :attr:`self` tensor to be the same size as the specified\n",
      " |      :attr:`tensor`. This is equivalent to ``self.resize_(tensor.size())``.\n",
      " |  \n",
      " |  rfft(...)\n",
      " |      rfft(signal_ndim, normalized=False, onesided=True) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.rfft`\n",
      " |  \n",
      " |  roll(...)\n",
      " |      roll(shifts, dims) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.roll`\n",
      " |  \n",
      " |  rot90(...)\n",
      " |      rot90(k, dims) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.rot90`\n",
      " |  \n",
      " |  round(...)\n",
      " |      round() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.round`\n",
      " |  \n",
      " |  round_(...)\n",
      " |      round_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.round`\n",
      " |  \n",
      " |  rsqrt(...)\n",
      " |      rsqrt() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.rsqrt`\n",
      " |  \n",
      " |  rsqrt_(...)\n",
      " |      rsqrt_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.rsqrt`\n",
      " |  \n",
      " |  scatter(...)\n",
      " |      scatter(dim, index, source) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.scatter_`\n",
      " |  \n",
      " |  scatter_(...)\n",
      " |      scatter_(dim, index, src) -> Tensor\n",
      " |      \n",
      " |      Writes all values from the tensor :attr:`src` into :attr:`self` at the indices\n",
      " |      specified in the :attr:`index` tensor. For each value in :attr:`src`, its output\n",
      " |      index is specified by its index in :attr:`src` for ``dimension != dim`` and by\n",
      " |      the corresponding value in :attr:`index` for ``dimension = dim``.\n",
      " |      \n",
      " |      For a 3-D tensor, :attr:`self` is updated as::\n",
      " |      \n",
      " |          self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0\n",
      " |          self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1\n",
      " |          self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2\n",
      " |      \n",
      " |      This is the reverse operation of the manner described in :meth:`~Tensor.gather`.\n",
      " |      \n",
      " |      :attr:`self`, :attr:`index` and :attr:`src` (if it is a Tensor) should have same\n",
      " |      number of dimensions. It is also required that ``index.size(d) <= src.size(d)``\n",
      " |      for all dimensions ``d``, and that ``index.size(d) <= self.size(d)`` for all\n",
      " |      dimensions ``d != dim``.\n",
      " |      \n",
      " |      Moreover, as for :meth:`~Tensor.gather`, the values of :attr:`index` must be\n",
      " |      between ``0`` and ``self.size(dim) - 1`` inclusive, and all values in a row\n",
      " |      along the specified dimension :attr:`dim` must be unique.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): the axis along which to index\n",
      " |          index (LongTensor): the indices of elements to scatter,\n",
      " |            can be either empty or the same size of src.\n",
      " |            When empty, the operation returns identity\n",
      " |          src (Tensor): the source element(s) to scatter,\n",
      " |            incase `value` is not specified\n",
      " |          value (float): the source element(s) to scatter,\n",
      " |            incase `src` is not specified\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.rand(2, 5)\n",
      " |          >>> x\n",
      " |          tensor([[ 0.3992,  0.2908,  0.9044,  0.4850,  0.6004],\n",
      " |                  [ 0.5735,  0.9006,  0.6797,  0.4152,  0.1732]])\n",
      " |          >>> torch.zeros(3, 5).scatter_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)\n",
      " |          tensor([[ 0.3992,  0.9006,  0.6797,  0.4850,  0.6004],\n",
      " |                  [ 0.0000,  0.2908,  0.0000,  0.4152,  0.0000],\n",
      " |                  [ 0.5735,  0.0000,  0.9044,  0.0000,  0.1732]])\n",
      " |      \n",
      " |          >>> z = torch.zeros(2, 4).scatter_(1, torch.tensor([[2], [3]]), 1.23)\n",
      " |          >>> z\n",
      " |          tensor([[ 0.0000,  0.0000,  1.2300,  0.0000],\n",
      " |                  [ 0.0000,  0.0000,  0.0000,  1.2300]])\n",
      " |  \n",
      " |  scatter_add(...)\n",
      " |      scatter_add(dim, index, source) -> Tensor\n",
      " |      \n",
      " |      Out-of-place version of :meth:`torch.Tensor.scatter_add_`\n",
      " |  \n",
      " |  scatter_add_(...)\n",
      " |      scatter_add_(dim, index, other) -> Tensor\n",
      " |      \n",
      " |      Adds all values from the tensor :attr:`other` into :attr:`self` at the indices\n",
      " |      specified in the :attr:`index` tensor in a similar fashion as\n",
      " |      :meth:`~torch.Tensor.scatter_`. For each value in :attr:`other`, it is added to\n",
      " |      an index in :attr:`self` which is specified by its index in :attr:`other`\n",
      " |      for ``dimension != dim`` and by the corresponding value in :attr:`index` for\n",
      " |      ``dimension = dim``.\n",
      " |      \n",
      " |      For a 3-D tensor, :attr:`self` is updated as::\n",
      " |      \n",
      " |          self[index[i][j][k]][j][k] += other[i][j][k]  # if dim == 0\n",
      " |          self[i][index[i][j][k]][k] += other[i][j][k]  # if dim == 1\n",
      " |          self[i][j][index[i][j][k]] += other[i][j][k]  # if dim == 2\n",
      " |      \n",
      " |      :attr:`self`, :attr:`index` and :attr:`other` should have same number of\n",
      " |      dimensions. It is also required that ``index.size(d) <= other.size(d)`` for all\n",
      " |      dimensions ``d``, and that ``index.size(d) <= self.size(d)`` for all dimensions\n",
      " |      ``d != dim``.\n",
      " |      \n",
      " |      Moreover, as for :meth:`~Tensor.gather`, the values of :attr:`index` must be\n",
      " |      between ``0`` and ``self.size(dim) - 1`` inclusive, and all values in a row along\n",
      " |      the specified dimension :attr:`dim` must be unique.\n",
      " |      \n",
      " |      .. include:: cuda_deterministic.rst\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): the axis along which to index\n",
      " |          index (LongTensor): the indices of elements to scatter and add,\n",
      " |            can be either empty or the same size of src.\n",
      " |            When empty, the operation returns identity.\n",
      " |          other (Tensor): the source elements to scatter and add\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.rand(2, 5)\n",
      " |          >>> x\n",
      " |          tensor([[0.7404, 0.0427, 0.6480, 0.3806, 0.8328],\n",
      " |                  [0.7953, 0.2009, 0.9154, 0.6782, 0.9620]])\n",
      " |          >>> torch.ones(3, 5).scatter_add_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)\n",
      " |          tensor([[1.7404, 1.2009, 1.9154, 1.3806, 1.8328],\n",
      " |                  [1.0000, 1.0427, 1.0000, 1.6782, 1.0000],\n",
      " |                  [1.7953, 1.0000, 1.6480, 1.0000, 1.9620]])\n",
      " |  \n",
      " |  select(...)\n",
      " |      select(dim, index) -> Tensor\n",
      " |      \n",
      " |      Slices the :attr:`self` tensor along the selected dimension at the given index.\n",
      " |      This function returns a tensor with the given dimension removed.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int): the dimension to slice\n",
      " |          index (int): the index to select with\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          :meth:`select` is equivalent to slicing. For example,\n",
      " |          ``tensor.select(0, index)`` is equivalent to ``tensor[index]`` and\n",
      " |          ``tensor.select(2, index)`` is equivalent to ``tensor[:,:,index]``.\n",
      " |  \n",
      " |  set_(...)\n",
      " |      set_(source=None, storage_offset=0, size=None, stride=None) -> Tensor\n",
      " |      \n",
      " |      Sets the underlying storage, size, and strides. If :attr:`source` is a tensor,\n",
      " |      :attr:`self` tensor will share the same storage and have the same size and\n",
      " |      strides as :attr:`source`. Changes to elements in one tensor will be reflected\n",
      " |      in the other.\n",
      " |      \n",
      " |      If :attr:`source` is a :class:`~torch.Storage`, the method sets the underlying\n",
      " |      storage, offset, size, and stride.\n",
      " |      \n",
      " |      Args:\n",
      " |          source (Tensor or Storage): the tensor or storage to use\n",
      " |          storage_offset (int, optional): the offset in the storage\n",
      " |          size (torch.Size, optional): the desired size. Defaults to the size of the source.\n",
      " |          stride (tuple, optional): the desired stride. Defaults to C-contiguous strides.\n",
      " |  \n",
      " |  short(...)\n",
      " |      short() -> Tensor\n",
      " |      \n",
      " |      ``self.short()`` is equivalent to ``self.to(torch.int16)``. See :func:`to`.\n",
      " |  \n",
      " |  sigmoid(...)\n",
      " |      sigmoid() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.sigmoid`\n",
      " |  \n",
      " |  sigmoid_(...)\n",
      " |      sigmoid_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.sigmoid`\n",
      " |  \n",
      " |  sign(...)\n",
      " |      sign() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.sign`\n",
      " |  \n",
      " |  sign_(...)\n",
      " |      sign_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.sign`\n",
      " |  \n",
      " |  sin(...)\n",
      " |      sin() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.sin`\n",
      " |  \n",
      " |  sin_(...)\n",
      " |      sin_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.sin`\n",
      " |  \n",
      " |  sinh(...)\n",
      " |      sinh() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.sinh`\n",
      " |  \n",
      " |  sinh_(...)\n",
      " |      sinh_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.sinh`\n",
      " |  \n",
      " |  size(...)\n",
      " |      size() -> torch.Size\n",
      " |      \n",
      " |      Returns the size of the :attr:`self` tensor. The returned value is a subclass of\n",
      " |      :class:`tuple`.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> torch.empty(3, 4, 5).size()\n",
      " |          torch.Size([3, 4, 5])\n",
      " |  \n",
      " |  slogdet(...)\n",
      " |      slogdet() -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.slogdet`\n",
      " |  \n",
      " |  smm(...)\n",
      " |  \n",
      " |  softmax(...)\n",
      " |  \n",
      " |  solve(...)\n",
      " |      solve(A) -> Tensor, Tensor\n",
      " |      \n",
      " |      See :func:`torch.solve`\n",
      " |  \n",
      " |  sort(...)\n",
      " |      sort(dim=-1, descending=False) -> (Tensor, LongTensor)\n",
      " |      \n",
      " |      See :func:`torch.sort`\n",
      " |  \n",
      " |  sparse_dim(...)\n",
      " |      sparse_dim() -> int\n",
      " |      \n",
      " |      If :attr:`self` is a sparse COO tensor (i.e., with ``torch.sparse_coo`` layout),\n",
      " |      this returns the number of sparse dimensions. Otherwise, this throws an error.\n",
      " |      \n",
      " |      See also :meth:`Tensor.dense_dim`.\n",
      " |  \n",
      " |  sparse_mask(...)\n",
      " |      sparse_mask(input, mask) -> Tensor\n",
      " |      \n",
      " |      Returns a new SparseTensor with values from Tensor :attr:`input` filtered\n",
      " |      by indices of :attr:`mask` and values are ignored. :attr:`input` and :attr:`mask`\n",
      " |      must have the same shape.\n",
      " |      \n",
      " |      Args:\n",
      " |          input (Tensor): an input Tensor\n",
      " |          mask (SparseTensor): a SparseTensor which we filter :attr:`input` based on its indices\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> nnz = 5\n",
      " |          >>> dims = [5, 5, 2, 2]\n",
      " |          >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),\n",
      " |                             torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)\n",
      " |          >>> V = torch.randn(nnz, dims[2], dims[3])\n",
      " |          >>> size = torch.Size(dims)\n",
      " |          >>> S = torch.sparse_coo_tensor(I, V, size).coalesce()\n",
      " |          >>> D = torch.randn(dims)\n",
      " |          >>> D.sparse_mask(S)\n",
      " |          tensor(indices=tensor([[0, 0, 0, 2],\n",
      " |                                 [0, 1, 4, 3]]),\n",
      " |                 values=tensor([[[ 1.6550,  0.2397],\n",
      " |                                 [-0.1611, -0.0779]],\n",
      " |      \n",
      " |                                [[ 0.2326, -1.0558],\n",
      " |                                 [ 1.4711,  1.9678]],\n",
      " |      \n",
      " |                                [[-0.5138, -0.0411],\n",
      " |                                 [ 1.9417,  0.5158]],\n",
      " |      \n",
      " |                                [[ 0.0793,  0.0036],\n",
      " |                                 [-0.2569, -0.1055]]]),\n",
      " |                 size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)\n",
      " |  \n",
      " |  sparse_resize_(...)\n",
      " |  \n",
      " |  sparse_resize_and_clear_(...)\n",
      " |  \n",
      " |  split_with_sizes(...)\n",
      " |  \n",
      " |  sqrt(...)\n",
      " |      sqrt() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.sqrt`\n",
      " |  \n",
      " |  sqrt_(...)\n",
      " |      sqrt_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.sqrt`\n",
      " |  \n",
      " |  squeeze(...)\n",
      " |      squeeze(dim=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.squeeze`\n",
      " |  \n",
      " |  squeeze_(...)\n",
      " |      squeeze_(dim=None) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.squeeze`\n",
      " |  \n",
      " |  sspaddmm(...)\n",
      " |  \n",
      " |  std(...)\n",
      " |      std(dim=None, unbiased=True, keepdim=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.std`\n",
      " |  \n",
      " |  storage(...)\n",
      " |      storage() -> torch.Storage\n",
      " |      \n",
      " |      Returns the underlying storage.\n",
      " |  \n",
      " |  storage_offset(...)\n",
      " |      storage_offset() -> int\n",
      " |      \n",
      " |      Returns :attr:`self` tensor's offset in the underlying storage in terms of\n",
      " |      number of storage elements (not bytes).\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([1, 2, 3, 4, 5])\n",
      " |          >>> x.storage_offset()\n",
      " |          0\n",
      " |          >>> x[3:].storage_offset()\n",
      " |          3\n",
      " |  \n",
      " |  storage_type(...)\n",
      " |      storage_type() -> type\n",
      " |      \n",
      " |      Returns the type of the underlying storage.\n",
      " |  \n",
      " |  stride(...)\n",
      " |      stride(dim) -> tuple or int\n",
      " |      \n",
      " |      Returns the stride of :attr:`self` tensor.\n",
      " |      \n",
      " |      Stride is the jump necessary to go from one element to the next one in the\n",
      " |      specified dimension :attr:`dim`. A tuple of all strides is returned when no\n",
      " |      argument is passed in. Otherwise, an integer value is returned as the stride in\n",
      " |      the particular dimension :attr:`dim`.\n",
      " |      \n",
      " |      Args:\n",
      " |          dim (int, optional): the desired dimension in which stride is required\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])\n",
      " |          >>> x.stride()\n",
      " |          (5, 1)\n",
      " |          >>>x.stride(0)\n",
      " |          5\n",
      " |          >>> x.stride(-1)\n",
      " |          1\n",
      " |  \n",
      " |  sub(...)\n",
      " |      sub(value, other) -> Tensor\n",
      " |      \n",
      " |      Subtracts a scalar or tensor from :attr:`self` tensor. If both :attr:`value` and\n",
      " |      :attr:`other` are specified, each element of :attr:`other` is scaled by\n",
      " |      :attr:`value` before being used.\n",
      " |      \n",
      " |      When :attr:`other` is a tensor, the shape of :attr:`other` must be\n",
      " |      :ref:`broadcastable <broadcasting-semantics>` with the shape of the underlying\n",
      " |      tensor.\n",
      " |  \n",
      " |  sub_(...)\n",
      " |      sub_(x) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.sub`\n",
      " |  \n",
      " |  sum(...)\n",
      " |      sum(dim=None, keepdim=False, dtype=None) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.sum`\n",
      " |  \n",
      " |  sum_to_size(...)\n",
      " |      sum_to_size(*size) -> Tensor\n",
      " |      \n",
      " |      Sum ``this`` tensor to :attr:`size`.\n",
      " |      :attr:`size` must be broadcastable to ``this`` tensor size.\n",
      " |      \n",
      " |      Args:\n",
      " |          size (int...): a sequence of integers defining the shape of the output tensor.\n",
      " |  \n",
      " |  svd(...)\n",
      " |      svd(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.svd`\n",
      " |  \n",
      " |  symeig(...)\n",
      " |      symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.symeig`\n",
      " |  \n",
      " |  t(...)\n",
      " |      t() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.t`\n",
      " |  \n",
      " |  t_(...)\n",
      " |      t_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.t`\n",
      " |  \n",
      " |  take(...)\n",
      " |      take(indices) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.take`\n",
      " |  \n",
      " |  tan(...)\n",
      " |      tan() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.tan`\n",
      " |  \n",
      " |  tan_(...)\n",
      " |      tan_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.tan`\n",
      " |  \n",
      " |  tanh(...)\n",
      " |      tanh() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.tanh`\n",
      " |  \n",
      " |  tanh_(...)\n",
      " |      tanh_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.tanh`\n",
      " |  \n",
      " |  to(...)\n",
      " |      to(*args, **kwargs) -> Tensor\n",
      " |      \n",
      " |      Performs Tensor dtype and/or device conversion. A :class:`torch.dtype` and :class:`torch.device` are\n",
      " |      inferred from the arguments of ``self.to(*args, **kwargs)``.\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          If the ``self`` Tensor already\n",
      " |          has the correct :class:`torch.dtype` and :class:`torch.device`, then ``self`` is returned.\n",
      " |          Otherwise, the returned tensor is a copy of ``self`` with the desired\n",
      " |          :class:`torch.dtype` and :class:`torch.device`.\n",
      " |      \n",
      " |      Here are the ways to call ``to``:\n",
      " |      \n",
      " |      .. function:: to(dtype, non_blocking=False, copy=False) -> Tensor\n",
      " |      \n",
      " |          Returns a Tensor with the specified :attr:`dtype`\n",
      " |      \n",
      " |      .. function:: to(device=None, dtype=None, non_blocking=False, copy=False) -> Tensor\n",
      " |      \n",
      " |          Returns a Tensor with the specified :attr:`device` and (optional)\n",
      " |          :attr:`dtype`. If :attr:`dtype` is ``None`` it is inferred to be ``self.dtype``.\n",
      " |          When :attr:`non_blocking`, tries to convert asynchronously with respect to\n",
      " |          the host if possible, e.g., converting a CPU Tensor with pinned memory to a\n",
      " |          CUDA Tensor.\n",
      " |          When :attr:`copy` is set, a new Tensor is created even when the Tensor\n",
      " |          already matches the desired conversion.\n",
      " |      \n",
      " |      .. function:: to(other, non_blocking=False, copy=False) -> Tensor\n",
      " |      \n",
      " |          Returns a Tensor with same :class:`torch.dtype` and :class:`torch.device` as\n",
      " |          the Tensor :attr:`other`. When :attr:`non_blocking`, tries to convert\n",
      " |          asynchronously with respect to the host if possible, e.g., converting a CPU\n",
      " |          Tensor with pinned memory to a CUDA Tensor.\n",
      " |          When :attr:`copy` is set, a new Tensor is created even when the Tensor\n",
      " |          already matches the desired conversion.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu\n",
      " |          >>> tensor.to(torch.float64)\n",
      " |          tensor([[-0.5044,  0.0005],\n",
      " |                  [ 0.3310, -0.0584]], dtype=torch.float64)\n",
      " |      \n",
      " |          >>> cuda0 = torch.device('cuda:0')\n",
      " |          >>> tensor.to(cuda0)\n",
      " |          tensor([[-0.5044,  0.0005],\n",
      " |                  [ 0.3310, -0.0584]], device='cuda:0')\n",
      " |      \n",
      " |          >>> tensor.to(cuda0, dtype=torch.float64)\n",
      " |          tensor([[-0.5044,  0.0005],\n",
      " |                  [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')\n",
      " |      \n",
      " |          >>> other = torch.randn((), dtype=torch.float64, device=cuda0)\n",
      " |          >>> tensor.to(other, non_blocking=True)\n",
      " |          tensor([[-0.5044,  0.0005],\n",
      " |                  [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')\n",
      " |  \n",
      " |  to_dense(...)\n",
      " |  \n",
      " |  to_mkldnn(...)\n",
      " |      to_mkldnn() -> Tensor\n",
      " |      Returns a copy of the tensor in ``torch.mkldnn`` layout.\n",
      " |  \n",
      " |  to_sparse(...)\n",
      " |      to_sparse(sparseDims) -> Tensor\n",
      " |      Returns a sparse copy of the tensor.  PyTorch supports sparse tensors in\n",
      " |      :ref:`coordinate format <sparse-docs>`.\n",
      " |      \n",
      " |      Args:\n",
      " |          sparseDims (int, optional): the number of sparse dimensions to include in the new sparse tensor\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])\n",
      " |          >>> d\n",
      " |          tensor([[ 0,  0,  0],\n",
      " |                  [ 9,  0, 10],\n",
      " |                  [ 0,  0,  0]])\n",
      " |          >>> d.to_sparse()\n",
      " |          tensor(indices=tensor([[1, 1],\n",
      " |                                 [0, 2]]),\n",
      " |                 values=tensor([ 9, 10]),\n",
      " |                 size=(3, 3), nnz=2, layout=torch.sparse_coo)\n",
      " |          >>> d.to_sparse(1)\n",
      " |          tensor(indices=tensor([[1]]),\n",
      " |                 values=tensor([[ 9,  0, 10]]),\n",
      " |                 size=(3, 3), nnz=1, layout=torch.sparse_coo)\n",
      " |  \n",
      " |  tolist(...)\n",
      " |      \"\n",
      " |      tolist() -> list or number\n",
      " |      \n",
      " |      Returns the tensor as a (nested) list. For scalars, a standard\n",
      " |      Python number is returned, just like with :meth:`~Tensor.item`.\n",
      " |      Tensors are automatically moved to the CPU first if necessary.\n",
      " |      \n",
      " |      This operation is not differentiable.\n",
      " |      \n",
      " |      Examples::\n",
      " |      \n",
      " |          >>> a = torch.randn(2, 2)\n",
      " |          >>> a.tolist()\n",
      " |          [[0.012766935862600803, 0.5415473580360413],\n",
      " |           [-0.08909505605697632, 0.7729271650314331]]\n",
      " |          >>> a[0,0].tolist()\n",
      " |          0.012766935862600803\n",
      " |  \n",
      " |  topk(...)\n",
      " |      topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)\n",
      " |      \n",
      " |      See :func:`torch.topk`\n",
      " |  \n",
      " |  trace(...)\n",
      " |      trace() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.trace`\n",
      " |  \n",
      " |  transpose(...)\n",
      " |      transpose(dim0, dim1) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.transpose`\n",
      " |  \n",
      " |  transpose_(...)\n",
      " |      transpose_(dim0, dim1) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.transpose`\n",
      " |  \n",
      " |  triangular_solve(...)\n",
      " |      triangular_solve(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)\n",
      " |      \n",
      " |      See :func:`torch.triangular_solve`\n",
      " |  \n",
      " |  tril(...)\n",
      " |      tril(k=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.tril`\n",
      " |  \n",
      " |  tril_(...)\n",
      " |      tril_(k=0) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.tril`\n",
      " |  \n",
      " |  triu(...)\n",
      " |      triu(k=0) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.triu`\n",
      " |  \n",
      " |  triu_(...)\n",
      " |      triu_(k=0) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.triu`\n",
      " |  \n",
      " |  trunc(...)\n",
      " |      trunc() -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.trunc`\n",
      " |  \n",
      " |  trunc_(...)\n",
      " |      trunc_() -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.trunc`\n",
      " |  \n",
      " |  type(...)\n",
      " |      type(dtype=None, non_blocking=False, **kwargs) -> str or Tensor\n",
      " |      Returns the type if `dtype` is not provided, else casts this object to\n",
      " |      the specified type.\n",
      " |      \n",
      " |      If this is already of the correct type, no copy is performed and the\n",
      " |      original object is returned.\n",
      " |      \n",
      " |      Args:\n",
      " |          dtype (type or string): The desired type\n",
      " |          non_blocking (bool): If ``True``, and the source is in pinned memory\n",
      " |              and destination is on the GPU or vice versa, the copy is performed\n",
      " |              asynchronously with respect to the host. Otherwise, the argument\n",
      " |              has no effect.\n",
      " |          **kwargs: For compatibility, may contain the key ``async`` in place of\n",
      " |              the ``non_blocking`` argument. The ``async`` arg is deprecated.\n",
      " |  \n",
      " |  type_as(...)\n",
      " |      type_as(tensor) -> Tensor\n",
      " |      \n",
      " |      Returns this tensor cast to the type of the given tensor.\n",
      " |      \n",
      " |      This is a no-op if the tensor is already of the correct type. This is\n",
      " |      equivalent to ``self.type(tensor.type())``\n",
      " |      \n",
      " |      Args:\n",
      " |          tensor (Tensor): the tensor which has the desired type\n",
      " |  \n",
      " |  unbind(...)\n",
      " |      unbind(dim=0) -> seq\n",
      " |      \n",
      " |      See :func:`torch.unbind`\n",
      " |  \n",
      " |  unfold(...)\n",
      " |      unfold(dimension, size, step) -> Tensor\n",
      " |      \n",
      " |      Returns a tensor which contains all slices of size :attr:`size` from\n",
      " |      :attr:`self` tensor in the dimension :attr:`dimension`.\n",
      " |      \n",
      " |      Step between two slices is given by :attr:`step`.\n",
      " |      \n",
      " |      If `sizedim` is the size of dimension :attr:`dimension` for :attr:`self`, the size of\n",
      " |      dimension :attr:`dimension` in the returned tensor will be\n",
      " |      `(sizedim - size) / step + 1`.\n",
      " |      \n",
      " |      An additional dimension of size :attr:`size` is appended in the returned tensor.\n",
      " |      \n",
      " |      Args:\n",
      " |          dimension (int): dimension in which unfolding happens\n",
      " |          size (int): the size of each slice that is unfolded\n",
      " |          step (int): the step between each slice\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.arange(1., 8)\n",
      " |          >>> x\n",
      " |          tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.])\n",
      " |          >>> x.unfold(0, 2, 1)\n",
      " |          tensor([[ 1.,  2.],\n",
      " |                  [ 2.,  3.],\n",
      " |                  [ 3.,  4.],\n",
      " |                  [ 4.,  5.],\n",
      " |                  [ 5.,  6.],\n",
      " |                  [ 6.,  7.]])\n",
      " |          >>> x.unfold(0, 2, 2)\n",
      " |          tensor([[ 1.,  2.],\n",
      " |                  [ 3.,  4.],\n",
      " |                  [ 5.,  6.]])\n",
      " |  \n",
      " |  uniform_(...)\n",
      " |      uniform_(from=0, to=1) -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with numbers sampled from the continuous uniform\n",
      " |      distribution:\n",
      " |      \n",
      " |      .. math::\n",
      " |          P(x) = \\dfrac{1}{\\text{to} - \\text{from}}\n",
      " |  \n",
      " |  unsqueeze(...)\n",
      " |      unsqueeze(dim) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.unsqueeze`\n",
      " |  \n",
      " |  unsqueeze_(...)\n",
      " |      unsqueeze_(dim) -> Tensor\n",
      " |      \n",
      " |      In-place version of :meth:`~Tensor.unsqueeze`\n",
      " |  \n",
      " |  values(...)\n",
      " |      values() -> Tensor\n",
      " |      \n",
      " |      If :attr:`self` is a sparse COO tensor (i.e., with ``torch.sparse_coo`` layout),\n",
      " |      this returns a view of the contained values tensor. Otherwise, this throws an\n",
      " |      error.\n",
      " |      \n",
      " |      See also :meth:`Tensor.indices`.\n",
      " |      \n",
      " |      .. note::\n",
      " |        This method can only be called on a coalesced sparse tensor. See\n",
      " |        :meth:`Tensor.coalesce` for details.\n",
      " |  \n",
      " |  var(...)\n",
      " |      var(dim=None, unbiased=True, keepdim=False) -> Tensor\n",
      " |      \n",
      " |      See :func:`torch.var`\n",
      " |  \n",
      " |  view(...)\n",
      " |      view(*shape) -> Tensor\n",
      " |      \n",
      " |      Returns a new tensor with the same data as the :attr:`self` tensor but of a\n",
      " |      different :attr:`shape`.\n",
      " |      \n",
      " |      The returned tensor shares the same data and must have the same number\n",
      " |      of elements, but may have a different size. For a tensor to be viewed, the new\n",
      " |      view size must be compatible with its original size and stride, i.e., each new\n",
      " |      view dimension must either be a subspace of an original dimension, or only span\n",
      " |      across original dimensions :math:`d, d+1, \\dots, d+k` that satisfy the following\n",
      " |      contiguity-like condition that :math:`\\forall i = 0, \\dots, k-1`,\n",
      " |      \n",
      " |      .. math::\n",
      " |      \n",
      " |        \\text{stride}[i] = \\text{stride}[i+1] \\times \\text{size}[i+1]\n",
      " |      \n",
      " |      Otherwise, :meth:`contiguous` needs to be called before the tensor can be\n",
      " |      viewed. See also: :meth:`reshape`, which returns a view if the shapes are\n",
      " |      compatible, and copies (equivalent to calling :meth:`contiguous`) otherwise.\n",
      " |      \n",
      " |      Args:\n",
      " |          shape (torch.Size or int...): the desired size\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> x = torch.randn(4, 4)\n",
      " |          >>> x.size()\n",
      " |          torch.Size([4, 4])\n",
      " |          >>> y = x.view(16)\n",
      " |          >>> y.size()\n",
      " |          torch.Size([16])\n",
      " |          >>> z = x.view(-1, 8)  # the size -1 is inferred from other dimensions\n",
      " |          >>> z.size()\n",
      " |          torch.Size([2, 8])\n",
      " |      \n",
      " |          >>> a = torch.randn(1, 2, 3, 4)\n",
      " |          >>> a.size()\n",
      " |          torch.Size([1, 2, 3, 4])\n",
      " |          >>> b = a.transpose(1, 2)  # Swaps 2nd and 3rd dimension\n",
      " |          >>> b.size()\n",
      " |          torch.Size([1, 3, 2, 4])\n",
      " |          >>> c = a.view(1, 3, 2, 4)  # Does not change tensor layout in memory\n",
      " |          >>> c.size()\n",
      " |          torch.Size([1, 3, 2, 4])\n",
      " |          >>> torch.equal(b, c)\n",
      " |          False\n",
      " |  \n",
      " |  view_as(...)\n",
      " |      view_as(other) -> Tensor\n",
      " |      \n",
      " |      View this tensor as the same size as :attr:`other`.\n",
      " |      ``self.view_as(other)`` is equivalent to ``self.view(other.size())``.\n",
      " |      \n",
      " |      Please see :meth:`~Tensor.view` for more information about ``view``.\n",
      " |      \n",
      " |      Args:\n",
      " |          other (:class:`torch.Tensor`): The result tensor has the same size\n",
      " |              as :attr:`other`.\n",
      " |  \n",
      " |  where(...)\n",
      " |      where(condition, y) -> Tensor\n",
      " |      \n",
      " |      ``self.where(condition, y)`` is equivalent to ``torch.where(condition, self, y)``.\n",
      " |      See :func:`torch.where`\n",
      " |  \n",
      " |  zero_(...)\n",
      " |      zero_() -> Tensor\n",
      " |      \n",
      " |      Fills :attr:`self` tensor with zeros.\n",
      " |  \n",
      " |  ----------------------------------------------------------------------\n",
      " |  Data descriptors inherited from torch._C._TensorBase:\n",
      " |  \n",
      " |  T\n",
      " |      Is this Tensor with its dimensions reversed.\n",
      " |      \n",
      " |      If ``n`` is the number of dimensions in ``x``,\n",
      " |      ``x.T`` is equivalent to ``x.permute(n-1, n-2, ..., 0)``.\n",
      " |  \n",
      " |  data\n",
      " |  \n",
      " |  device\n",
      " |      Is the :class:`torch.device` where this Tensor is.\n",
      " |  \n",
      " |  dtype\n",
      " |  \n",
      " |  grad\n",
      " |      This attribute is ``None`` by default and becomes a Tensor the first time a call to\n",
      " |      :func:`backward` computes gradients for ``self``.\n",
      " |      The attribute will then contain the gradients computed and future calls to\n",
      " |      :func:`backward` will accumulate (add) gradients into it.\n",
      " |  \n",
      " |  grad_fn\n",
      " |  \n",
      " |  is_cuda\n",
      " |      Is ``True`` if the Tensor is stored on the GPU, ``False`` otherwise.\n",
      " |  \n",
      " |  is_leaf\n",
      " |      All Tensors that have :attr:`requires_grad` which is ``False`` will be leaf Tensors by convention.\n",
      " |      \n",
      " |      For Tensors that have :attr:`requires_grad` which is ``True``, they will be leaf Tensors if they were\n",
      " |      created by the user. This means that they are not the result of an operation and so\n",
      " |      :attr:`grad_fn` is None.\n",
      " |      \n",
      " |      Only leaf Tensors will have their :attr:`grad` populated during a call to :func:`backward`.\n",
      " |      To get :attr:`grad` populated for non-leaf Tensors, you can use :func:`retain_grad`.\n",
      " |      \n",
      " |      Example::\n",
      " |      \n",
      " |          >>> a = torch.rand(10, requires_grad=True)\n",
      " |          >>> a.is_leaf\n",
      " |          True\n",
      " |          >>> b = torch.rand(10, requires_grad=True).cuda()\n",
      " |          >>> b.is_leaf\n",
      " |          False\n",
      " |          # b was created by the operation that cast a cpu Tensor into a cuda Tensor\n",
      " |          >>> c = torch.rand(10, requires_grad=True) + 2\n",
      " |          >>> c.is_leaf\n",
      " |          False\n",
      " |          # c was created by the addition operation\n",
      " |          >>> d = torch.rand(10).cuda()\n",
      " |          >>> d.is_leaf\n",
      " |          True\n",
      " |          # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)\n",
      " |          >>> e = torch.rand(10).cuda().requires_grad_()\n",
      " |          >>> e.is_leaf\n",
      " |          True\n",
      " |          # e requires gradients and has no operations creating it\n",
      " |          >>> f = torch.rand(10, requires_grad=True, device=\"cuda\")\n",
      " |          >>> f.is_leaf\n",
      " |          True\n",
      " |          # f requires grad, has no operation creating it\n",
      " |  \n",
      " |  is_mkldnn\n",
      " |  \n",
      " |  is_quantized\n",
      " |  \n",
      " |  is_sparse\n",
      " |  \n",
      " |  layout\n",
      " |  \n",
      " |  name\n",
      " |  \n",
      " |  names\n",
      " |      Stores names for each of this tensor's dimensions.\n",
      " |      \n",
      " |      ``names[idx]`` corresponds to the name of tensor dimension ``idx``.\n",
      " |      Names are either a string if the dimension is named or ``None`` if the\n",
      " |      dimension is unnamed.\n",
      " |      \n",
      " |      Dimension names may contain characters or underscore. Furthermore, a dimension\n",
      " |      name must be a valid Python variable name (i.e., does not start with underscore).\n",
      " |      \n",
      " |      Tensors may not have two named dimensions with the same name.\n",
      " |      \n",
      " |      .. warning::\n",
      " |          The named tensor API is experimental and subject to change.\n",
      " |  \n",
      " |  ndim\n",
      " |      Alias for :meth:`~Tensor.dim()`\n",
      " |  \n",
      " |  output_nr\n",
      " |  \n",
      " |  requires_grad\n",
      " |      Is ``True`` if gradients need to be computed for this Tensor, ``False`` otherwise.\n",
      " |      \n",
      " |      .. note::\n",
      " |      \n",
      " |          The fact that gradients need to be computed for a Tensor do not mean that the :attr:`grad`\n",
      " |          attribute will be populated, see :attr:`is_leaf` for more details.\n",
      " |  \n",
      " |  shape\n",
      " |  \n",
      " |  volatile\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "help(t.LongTensor()) # 啊太长不看"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**其他常用选择函数**\n",
    " \n",
    " |函数|功能|\n",
    " |-----|----|\n",
    " |index_select(input,dim,index)|在指定dim上选取某些行和列|\n",
    " |masked_select(input,mask)|同a[a>0]，使用ByteTensor选取|\n",
    " |non_zero(input)|非零元素的下标|\n",
    " |gather(input,dim,index)|根据index，在dim维度上选取数据，输出的size与index一样|\n",
    "    \n",
    "**gather()的具体示例如下：**\n",
    "1. 取对角线元素"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0, 1, 2, 3]]) \n",
      "\n",
      "tensor([[ 0,  1,  2,  3],\n",
      "        [ 4,  5,  6,  7],\n",
      "        [ 8,  9, 10, 11],\n",
      "        [12, 13, 14, 15]]) \n",
      "\n",
      "tensor([[ 0,  5, 10, 15]])\n",
      "tensor([[0],\n",
      "        [1],\n",
      "        [2],\n",
      "        [3]]) \n",
      "\n",
      "tensor([[ 0],\n",
      "        [ 5],\n",
      "        [10],\n",
      "        [15]])\n"
     ]
    }
   ],
   "source": [
    "index=t.LongTensor([[0,1,2,3]])\n",
    "print(index,'\\n') #第一个维度的数为1\n",
    "a=t.arange(0,16).view(4,4)\n",
    "print(a,'\\n')\n",
    "print(a.gather(0,index))\n",
    "'''\n",
    "0表示对第一个维度操作，然后按index的顺序依次取\n",
    "即按行操作：第一行，第二行，第三行。。。，每一行按照index的顺序取\n",
    "'''\n",
    "index=t.LongTensor([[0,1,2,3]]).t()\n",
    "print(index,'\\n') #第二个维度的数为1 ，即4*1\n",
    "print(a.gather(1,index)) # 在第二个维度选取数据，依次取0号，1号，2,号。。。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. 取反对角线元素"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[12,  9,  6,  3]])\n",
      "tensor([[ 3],\n",
      "        [ 6],\n",
      "        [ 9],\n",
      "        [12]])\n"
     ]
    }
   ],
   "source": [
    "index=t.LongTensor([[3,2,1,0]])\n",
    "# print(index,'\\n') #第二个维度的数为1 ，即4*1\n",
    "print(a.gather(0,index))\n",
    "index=index.t()\n",
    "print(a.gather(1,index))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. 取两个对角线上元素"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0,  5, 10, 15],\n",
      "        [12,  9,  6,  3]])\n",
      "tensor([[ 0,  3],\n",
      "        [ 5,  6],\n",
      "        [10,  9],\n",
      "        [15, 12]])\n"
     ]
    }
   ],
   "source": [
    "index=t.LongTensor([[0,1,2,3],[3,2,1,0]])\n",
    "print(a.gather(0,index))\n",
    "index=index.t()\n",
    "print(a.gather(1,index))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "与gather相应的逆操作是scatter_，sactter_把取出来的数据再放回去，<front color=red>注意scatter_是inplace操作</front>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
