{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f9725027",
   "metadata": {},
   "source": [
    "# 2.1 Data Manipulation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17586b60",
   "metadata": {},
   "source": [
    "2.1.1 Getting Started\n",
    "\n",
    "   Question 1: Why do we need data manipulation?\n",
    "\n",
    "    · In deep learning, the data we start with is rarely in a form the computer can use directly. In everyday terms, if you describe an object by its features so it can be stored in a computer, that raw description still has to pass through one more step before it can be processed: data preprocessing.\n",
    "    · Thanks to progress in machine learning and neural networks, backed by growing throughput and floating-point performance, modern computers can process huge amounts of data quickly. That does not mean arbitrary data can be fed in as-is: the initial 'raw data' must first be cleaned and transformed so the computer can work with it effectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "2c3e5877",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入包\n",
    "import torch "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a2a65b6",
   "metadata": {},
   "source": [
    "   Question 2: What is a tensor, and how does it relate to ordinary numbers?\n",
    "\n",
    "    · One day, Xiao Ming and Xiao Hong go to the supermarket to buy fruit. On the shelf, apples are priced at 2 yuan each and bananas at 1.5 yuan each; at checkout they pay 17.5 yuan in total.\n",
    "    · In this story, each price is a numerical value with a unit. In the data world, collections of such numerical values are what we call 'tensors'.\n",
    "\n",
    "[Tip] 'Array' (what functions like arange produce) is the computer-science term and 'tensor' is the mathematical one; in deep learning they refer to the same kind of object, so do not get hung up on the distinction."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f00b5d5f",
   "metadata": {},
   "source": [
    "Example 1: create a tensor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "a21ad175",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.arange(12)\n",
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "52f66c87",
   "metadata": {},
   "source": [
    "   Question 3: What is the shape of a tensor, and how do we get it?\n",
    "\n",
    "    · One day, Xiao Ming goes out for a walk and sees a swimming pool painted on a wall. The pool is a rectangle covered with ducks: 6 ducks are drawn along the longer edge and 4 along the shorter one.\n",
    "    · In this story, the rectangle is the pool's 'shape'. Connecting this to mathematics, the picture corresponds to a matrix of shape 6×4, which we can view as 6 row vectors; a vector, in turn, is made up of scalars. By convention, a 0-dimensional tensor is called a scalar, a 1-dimensional one a vector, a 2-dimensional one a matrix, and 3-dimensional or higher ones are simply called n-dimensional tensors.\n",
    "    · In code, the shape is available as an attribute: x.shape gives the shape of the tensor x."
   ]
  },
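  {
   "cell_type": "markdown",
   "id": "a1f20d01",
   "metadata": {},
   "source": [
    "A minimal sketch tying the terminology above to code (the variable names are illustrative): a 0-dimensional tensor (scalar), a 1-dimensional tensor (vector), and a 2-dimensional tensor (matrix) each report their dimensionality through the shape attribute.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "s = torch.tensor(3.5)                # 0-D tensor: a scalar\n",
    "v = torch.arange(4)                  # 1-D tensor: a vector\n",
    "m = torch.arange(24).reshape(6, 4)   # 2-D tensor: a 6x4 matrix, like the duck pool\n",
    "s.shape, v.shape, m.shape\n",
    "```"
   ]
  },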
  {
   "cell_type": "markdown",
   "id": "de156ad9",
   "metadata": {},
   "source": [
    "Example 2: get the shape of the tensor x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "57232c76",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([12])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57dccd88",
   "metadata": {},
   "source": [
    "    · The total number of elements in a tensor, i.e. the product of all entries of its shape, is its size. Because we are dealing with a vector here, its shape contains the same single number as its size."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86baabc7",
   "metadata": {},
   "source": [
    "Example 3: count the elements of x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "434ccb69",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "12"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.numel()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adf1a7b4",
   "metadata": {},
   "source": [
    "    · reshape redefines a tensor's shape while keeping the number of elements and their values unchanged; when possible the result is a view of the same underlying memory."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b5d70233",
   "metadata": {},
   "source": [
    "Example 4: change the shape of x with reshape()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "1facc879",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0,  1,  2,  3,  4,  5],\n",
       "        [ 6,  7,  8,  9, 10, 11]])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = x.reshape(2,6)\n",
    "X"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "94495e48",
   "metadata": {},
   "source": [
    "    · reshape can infer one dimension from the others: pass -1 for the dimension you want computed. In the two examples below, -1 is not a real size, but once the width (or the height) is given, the remaining dimension is filled in automatically."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "cd5efd12",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0,  1,  2,  3],\n",
       "        [ 4,  5,  6,  7],\n",
       "        [ 8,  9, 10, 11]])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = x.reshape(-1,4)\n",
    "X"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "1fb5bfbc",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0,  1,  2,  3],\n",
       "        [ 4,  5,  6,  7],\n",
       "        [ 8,  9, 10, 11]])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = x.reshape(3,-1)\n",
    "X"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07c00111",
   "metadata": {},
   "source": [
    "    · Create a tensor of a given shape with all elements initialized to a fixed value"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b33bd387",
   "metadata": {},
   "source": [
    "Example 5: create and zero-initialize a tensor with torch.zeros(); here the shape is passed as a single tuple"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "a9ffbde6",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[0., 0., 0., 0.],\n",
       "         [0., 0., 0., 0.]],\n",
       "\n",
       "        [[0., 0., 0., 0.],\n",
       "         [0., 0., 0., 0.]],\n",
       "\n",
       "        [[0., 0., 0., 0.],\n",
       "         [0., 0., 0., 0.]]])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.zeros((3,2,4))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46f8305a",
   "metadata": {},
   "source": [
    "Example 6: create and one-initialize a tensor with torch.ones(); here the shape is passed as a single tuple"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "2fb6961f",
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[1., 1., 1., 1.],\n",
       "         [1., 1., 1., 1.]],\n",
       "\n",
       "        [[1., 1., 1., 1.],\n",
       "         [1., 1., 1., 1.]],\n",
       "\n",
       "        [[1., 1., 1., 1.],\n",
       "         [1., 1., 1., 1.]]])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "L = torch.ones((3,2,4))\n",
    "L"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12463ea3",
   "metadata": {},
   "source": [
    "    · Create a tensor whose elements are drawn at random from the standard Gaussian (normal) distribution with mean 0 and standard deviation 1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d1e853f2",
   "metadata": {},
   "source": [
    "Example 7: create and randomly initialize a tensor with torch.randn()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "0a7337cf",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-2.4424e+00, -5.6433e-01, -9.5162e-01, -1.3353e+00],\n",
       "        [-5.1267e-04,  1.3417e+00,  5.5197e-01, -7.2932e-01],\n",
       "        [ 1.5266e+00,  1.8851e+00,  6.5820e-01,  1.3485e-02]])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.randn(3,4)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3dff9a3",
   "metadata": {},
   "source": [
    " Formatting tip\n",
    "    · Nested lists\n",
    "    · Writing the literal as below, one row per line, makes the code easier to revise later"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "43c9ad5a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[2, 1, 4, 3],\n",
       "        [1, 2, 3, 4],\n",
       "        [4, 3, 2, 1]])"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.tensor([\n",
    "            [2, 1, 4, 3]\n",
    "            ,[1, 2, 3, 4]\n",
    "            ,[4, 3, 2, 1]\n",
    "])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47bfd415",
   "metadata": {},
   "source": [
    "2.1.2 Operations"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a5744d0",
   "metadata": {},
   "source": [
    "    · The cell below applies the standard arithmetic operators elementwise and returns a tuple of five tensors\n",
    "    · If any element is floating point, integer operands are automatically promoted to floating point"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8603c815",
   "metadata": {},
   "source": [
    "Example 8: tensor arithmetic"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "cffcd8d3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([4., 5., 7., 8.]),\n",
       " tensor([-2., -1.,  1.,  2.]),\n",
       " tensor([ 3.,  6., 12., 15.]),\n",
       " tensor([0.3333, 0.6667, 1.3333, 1.6667]),\n",
       " tensor([  1.,   8.,  64., 125.]))"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.tensor([1.0, 2, 4, 5])\n",
    "y = torch.tensor([3, 3, 3, 3])\n",
    "x + y, x - y, x * y, x /y, x ** y"
   ]
  },
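  {
   "cell_type": "markdown",
   "id": "b2e31a02",
   "metadata": {},
   "source": [
    "To make the type-promotion rule above concrete, a small check (a sketch; the dtypes noted in the comments are the PyTorch defaults): adding an integer tensor to a floating-point tensor yields a floating-point result.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "i = torch.tensor([1, 2, 3])        # integer tensor (int64 by default)\n",
    "f = torch.tensor([0.5, 0.5, 0.5])  # floating-point tensor (float32 by default)\n",
    "(i + f).dtype                      # the integers are promoted to float\n",
    "```"
   ]
  },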
  {
   "cell_type": "markdown",
   "id": "1e55ee5e",
   "metadata": {},
   "source": [
    "Example 9: exponentiation, a unary operation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "141bd5e0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([  2.7183,   7.3891,  54.5982, 148.4132])"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.exp(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c24f89a6",
   "metadata": {},
   "source": [
    "Example 10: count the elements of x with x.numel()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "6fdc5bc1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.numel()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9f22ce1b",
   "metadata": {},
   "source": [
    "    · Multiple tensors can be concatenated along a chosen axis, e.g. along axis 0 (the rows, i.e. the first element of the shape)\n",
    "   Question 1: Why use 32-bit floats here?\n",
    "\n",
    "    · float32 is the usual standard in deep learning: datasets are large, so 64-bit floats are generally not recommended, as they double memory and bandwidth for little benefit"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0191e788",
   "metadata": {},
   "source": [
    "   Question 2: Why is it sometimes reshape() and sometimes reshape(()), with no visible difference in the results?\n",
    "\n",
    "    · The torch docs define torch.reshape(input, shape) -> Tensor, where input is a Tensor and shape is the desired new shape.\n",
    "    · My understanding: y = x.reshape(3, 4) calls the method on the tensor instance x,\n",
    "    · while a = torch.arange(3).reshape((3, 1)) matches the documented function signature, passing the shape as a tuple.\n",
    "    · The two forms only look different. When a method is defined inside a Python class, the instance is bound as self, i.e. reshape(self, shape); since an instance method cannot be called without its instance, calling x.reshape(shape) is equivalent to reshape(x, shape)."
   ]
  },
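  {
   "cell_type": "markdown",
   "id": "c3d42b03",
   "metadata": {},
   "source": [
    "A quick check of the equivalence described above (a sketch; torch.equal compares both shape and values):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.arange(12)\n",
    "a = x.reshape(3, 4)             # instance method, bare integers\n",
    "b = x.reshape((3, 4))           # instance method, shape as a tuple\n",
    "c = torch.reshape(x, (3, 4))    # module-level function\n",
    "torch.equal(a, b), torch.equal(b, c)\n",
    "```"
   ]
  },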
  {
   "cell_type": "markdown",
   "id": "dd265f28",
   "metadata": {},
   "source": [
    "Example 11: concatenate two tensors with torch.cat((X, Y), dim=0) or dim=1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "76c919e9",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.]]),\n",
       " tensor([[2., 1., 4., 3.],\n",
       "         [1., 2., 3., 4.],\n",
       "         [4., 3., 2., 1.]]),\n",
       " tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.],\n",
       "         [ 2.,  1.,  4.,  3.],\n",
       "         [ 1.,  2.,  3.,  4.],\n",
       "         [ 4.,  3.,  2.,  1.]]))"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = torch.arange(12, dtype=torch.float32).reshape((3, 4))\n",
    "Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])\n",
    "X, Y, torch.cat((X, Y), dim=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "5d9735e7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.,  1.,  2.,  3.,  2.,  1.,  4.,  3.],\n",
       "        [ 4.,  5.,  6.,  7.,  1.,  2.,  3.,  4.],\n",
       "        [ 8.,  9., 10., 11.,  4.,  3.,  2.,  1.]])"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.cat((X, Y), dim=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f761c99d",
   "metadata": {},
   "source": [
    "Example 12: build a tensor with a logical comparison"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "4b44a091",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[False,  True, False,  True],\n",
       "        [False, False, False, False],\n",
       "        [False, False, False, False]])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X == Y"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a95f7dad",
   "metadata": {},
   "source": [
    "Example 13: sum all elements of X with X.sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "690b3950",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(66.)"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X.sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2b152d5",
   "metadata": {},
   "source": [
    "2.1.3 Broadcasting"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c9f403e",
   "metadata": {},
   "source": [
    "   Question 1: What is broadcasting?\n",
    "\n",
    "    · Two tensors of the same shape can be combined elementwise directly. When the shapes differ but are compatible, the 'broadcasting mechanism' expands both operands to a common shape before the operation: below, the tensor a is expanded along its columns by copying its single column, and b is expanded along its rows by copying its single row\n",
    "\n",
    "   Question 2: Can broadcasting be used freely?\n",
    "\n",
    "    · No. In some computations broadcasting silently produces a result you did not intend; for instance, the NA values in Section 2.2 can be replaced by a computed mean or dropped, but blindly relying on broadcasting there would give a wrong result"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66aab0bf",
   "metadata": {},
   "source": [
    "Example 14: add the tensors a + b via broadcasting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "199820ee",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[0],\n",
       "         [1],\n",
       "         [2]]),\n",
       " tensor([[0, 1]]))"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = torch.arange(3).reshape(3, 1)\n",
    "b = torch.arange(2).reshape(1, 2)\n",
    "a, b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "4a65959b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0, 1],\n",
       "        [1, 2],\n",
       "        [2, 3]])"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a + b"
   ]
  },
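  {
   "cell_type": "markdown",
   "id": "d4e53c04",
   "metadata": {},
   "source": [
    "To verify the expansion described above (a sketch using expand(), which materializes the broadcast views explicitly): copying a's column to form a 3×2 tensor and b's row to form another 3×2 tensor, then adding, matches the broadcast result.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "a = torch.arange(3).reshape(3, 1)\n",
    "b = torch.arange(2).reshape(1, 2)\n",
    "torch.equal(a + b, a.expand(3, 2) + b.expand(3, 2))\n",
    "```"
   ]
  },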
  {
   "cell_type": "markdown",
   "id": "26fdb03b",
   "metadata": {},
   "source": [
    "2.1.4 Indexing and Slicing"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9dc8cd4",
   "metadata": {},
   "source": [
    "   Question 1: How does indexing work?\n",
    "\n",
    "    · You can count from the left starting at 0, or from the right starting at -1, just as with standard Python lists. In the example below, think of the matrix rows as elements of a list: -1 means the last element, i.e. the first one counting from the right; 1:3 is the half-open interval [1, 3), so it selects rows 1 and 2, and negative ranges follow the same half-open convention."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "28051d97",
   "metadata": {},
   "source": [
    "Example 15: tensor slicing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "52129f58",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([ 8.,  9., 10., 11.]),\n",
       " tensor([[ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.]]))"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X[-1], X[1:3]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "396b4e1d",
   "metadata": {},
   "source": [
    "Example 16: as in the examples above, list elements can be modified, and so can matrix entries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "8d67c40f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.,  1.,  2.,  3.],\n",
       "        [ 4.,  5.,  9.,  7.],\n",
       "        [ 8.,  9., 10., 11.]])"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X[1, 2] = 9\n",
    "X"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79603fed",
   "metadata": {},
   "source": [
    "Example 17: not content with assigning to a single element, we can also assign to a whole group or region of elements at once; in the 2-D case, try reading the two indices as selecting rows and columns. If other code still needs the matrix X, it is safer to rebuild it first, so that the in-place edits below do not cause surprises elsewhere."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "ffa1365c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.,  1.,  2.,  3.],\n",
       "        [ 4.,  5.,  6.,  7.],\n",
       "        [ 8.,  9., 10., 11.]])"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = torch.arange(12, dtype=torch.float32).reshape((3, 4))\n",
    "X"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "3b40772f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 2.,  2.,  2.,  2.],\n",
       "        [ 2.,  2.,  2.,  2.],\n",
       "        [ 8.,  9., 10., 11.]])"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X[0:2, :] = 2\n",
    "X"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "594db813",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 2.,  3.,  3.,  2.],\n",
       "        [ 3.,  3.,  3.,  3.],\n",
       "        [ 8.,  9., 10., 11.]])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X[0,1:3] = 3\n",
    "X[1, :] = 3\n",
    "X"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "184733e4",
   "metadata": {},
   "source": [
    "2.1.5 Saving Memory"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e039005",
   "metadata": {},
   "source": [
    "    · Python tries to limit unnecessary memory use: for instance, when a variable is no longer referenced, the memory it occupied is released automatically"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ef755de",
   "metadata": {},
   "source": [
    "Example 18: demonstrated with Python's id() function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "e68c9cfb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "False"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "before = id(Y) # before records the identity (address) of Y\n",
    "Y = Y + X # reassigning Y allocates a new block of memory\n",
    "id(Y) == before"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "530634d3",
   "metadata": {},
   "source": [
    "    Why this is undesirable:\n",
    "    · First, we do not want to allocate memory unnecessarily. In machine learning we may have hundreds of megabytes of parameters and update all of them many times per second; ideally those updates happen in place.\n",
    "    · Second, several variables may point to the same parameters. If we do not update in place, other references still point to the old memory location, and some of our code may inadvertently use stale parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "8e7ca1a7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "id(Z): 140295141652192\n",
      "id(Z): 140295141652192\n"
     ]
    }
   ],
   "source": [
    "Z = torch.zeros_like(Y)\n",
    "print('id(Z):', id(Z))\n",
    "Z[:] = X + Y\n",
    "print('id(Z):', id(Z))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59d52837",
   "metadata": {},
   "source": [
    "    · The example above matches how slice assignment works on Python sequences: without changing the memory allocation, the slice X[:] writes the computed result back into the variable being updated"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8aeeec67",
   "metadata": {},
   "source": [
    "Example 19: write the result back through a slice, without changing the memory allocation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "646db8ad",
   "metadata": {},
   "outputs": [],
   "source": [
    "X = torch.arange(12, dtype=torch.float32).reshape((3, 4))\n",
    "Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "fe82035a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.]]),\n",
       " tensor([[2., 1., 4., 3.],\n",
       "         [1., 2., 3., 4.],\n",
       "         [4., 3., 2., 1.]]))"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X, Y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "0e9c5e9b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "l = id(Y)\n",
    "Y[:] = X + Y\n",
    "id(Y) == l"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64cc40bd",
   "metadata": {},
   "source": [
    "Example 20: besides slice assignment, X += Y also updates in place"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "db2d82f4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "before = id(X)\n",
    "X += Y\n",
    "id(X) == before"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7e4b6af",
   "metadata": {},
   "source": [
    "2.1.6 Conversion to Other Python Objects"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0f90447",
   "metadata": {},
   "source": [
    "    · Converting a torch tensor to a NumPy array, and back, is easy. A tensor created with torch.tensor(ndarray) does not share memory with its source, so the two can be manipulated independently; this matters when a result produced on the GPU or CPU is also needed by NumPy on the Python side, since avoiding an unintended shared buffer keeps the two from interfering with each other.\n",
    "\n",
    "   Question: When is memory actually shared?\n",
    "\n",
    "    · The .numpy() and torch.from_numpy() conversions share memory with their source, whereas torch.tensor(ndarray) copies the data and does not.\n",
    "    Small tip: do not place several bare expressions on separate lines in one cell; only the last line is displayed by default, so use print() for the others"
   ]
  },
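  {
   "cell_type": "markdown",
   "id": "e5f64d05",
   "metadata": {},
   "source": [
    "A small check of the sharing rules above (a sketch): an in-place edit to the tensor shows up in the array obtained via .numpy(), but not in a copy made with torch.tensor().\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "t = torch.zeros(3)\n",
    "shared = t.numpy()             # shares memory with t\n",
    "copied = torch.tensor(shared)  # copies the data\n",
    "t[0] = 7.0                     # in-place edit on the tensor\n",
    "shared[0], copied[0]           # the shared array sees 7.0, the copy does not\n",
    "```"
   ]
  },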
  {
   "cell_type": "markdown",
   "id": "af712ccb",
   "metadata": {},
   "source": [
    "Example 21: check the converted types with type()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "ecbc4918",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(numpy.ndarray, torch.Tensor)"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = X.numpy()\n",
    "B = torch.tensor(A)\n",
    "type(A), type(B)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "52435cf2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(array([[ 2.,  3.,  8.,  9.],\n",
       "        [ 9., 12., 15., 18.],\n",
       "        [20., 21., 22., 23.]], dtype=float32),\n",
       " tensor([[ 2.,  3.,  8.,  9.],\n",
       "         [ 9., 12., 15., 18.],\n",
       "         [20., 21., 22., 23.]]))"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A, B"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17440f86",
   "metadata": {},
   "source": [
    "   Question 1: How do we convert a tensor to a Python scalar?\n",
    "\n",
    "    · To convert a size-1 tensor to a Python scalar,\n",
    "    · call the item() method or one of Python's built-in conversion functions. An analogy for what item() does: with Python's input(), if you want a numeric value rather than a string, you wrap the call in eval()."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4431842",
   "metadata": {},
   "source": [
    "Example 22: convert a size-1 tensor with a.item()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "4e70e1d1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([3.5000]), 3.5, 3.5, 3)"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = torch.tensor([3.5])\n",
    "a, a.item(), float(a), int(a)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "2325cb30",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "str"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = input()\n",
    "type(a)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "f68f3862",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3.5\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "float"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = eval(input())\n",
    "type(a)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f2c3705",
   "metadata": {},
   "source": [
    "2.1.7 Summary\n",
    "\n",
    "    · The main interface for storing and manipulating data in deep learning is the tensor (an n-dimensional array). It provides a variety of functionality, including basic arithmetic, broadcasting, indexing, slicing, memory saving, and conversion to other Python objects."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "97726e57",
   "metadata": {},
   "source": [
    "2.1.8 Exercises\n",
    "\n",
    "    · Run the code in this section. Change the conditional statement X == Y to X < Y or X > Y, and see what kind of tensor you get.\n",
    "    · Replace the two tensors in the broadcasting example with tensors of other shapes (e.g. 3-dimensional ones). Is the result what you expected?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ccc43d67",
   "metadata": {},
   "source": [
    "Exercise 1: change the conditional statement X == Y to X < Y or X > Y, and see what kind of tensor you get"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "63fbf806",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.]]),\n",
       " tensor([[2., 1., 4., 3.],\n",
       "         [1., 2., 3., 4.],\n",
       "         [4., 3., 2., 1.]]),\n",
       " tensor([[False, False, False, False],\n",
       "         [ True,  True,  True,  True],\n",
       "         [ True,  True,  True,  True]]),\n",
       " tensor([[ True, False,  True, False],\n",
       "         [False, False, False, False],\n",
       "         [False, False, False, False]]))"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = torch.arange(12, dtype=torch.float32).reshape((3, 4))\n",
    "Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])\n",
    "X, Y, X > Y, X < Y"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd660d35",
   "metadata": {},
   "source": [
    "Exercise 2: replace the two tensors in the broadcasting example with tensors of other shapes (e.g. 3-dimensional ones). Is the result what you expected?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "4b74af59",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[ 0.,  1.,  2.],\n",
       "          [ 3.,  4.,  5.],\n",
       "          [ 6.,  7.,  8.],\n",
       "          [ 9., 10., 11.]],\n",
       " \n",
       "         [[12., 13., 14.],\n",
       "          [15., 16., 17.],\n",
       "          [18., 19., 20.],\n",
       "          [21., 22., 23.]]]),\n",
       " tensor([[[ 0.,  1.,  2.],\n",
       "          [ 3.,  4.,  5.]],\n",
       " \n",
       "         [[ 6.,  7.,  8.],\n",
       "          [ 9., 10., 11.]],\n",
       " \n",
       "         [[12., 13., 14.],\n",
       "          [15., 16., 17.]],\n",
       " \n",
       "         [[18., 19., 20.],\n",
       "          [21., 22., 23.]]]))"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = torch.arange(24.0).reshape((2,4,3))\n",
    "b = torch.arange(24.0).reshape((4,2,3))\n",
    "a, b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "2ead72a0",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 1",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-39-bd58363a63fc>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0ma\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0mb\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 1"
     ]
    }
   ],
   "source": [
    "a + b"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9837ccf4",
   "metadata": {},
   "source": [
    "    · The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 1\n",
    "    · That is, along dimension 1, where neither size is 1, the sizes 4 and 2 do not match, so the shapes (2, 4, 3) and (4, 2, 3) cannot be broadcast together"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "6bbb0d0e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[ 0.,  1.,  2.],\n",
       "          [ 3.,  4.,  5.],\n",
       "          [ 6.,  7.,  8.],\n",
       "          [ 9., 10., 11.]],\n",
       " \n",
       "         [[12., 13., 14.],\n",
       "          [15., 16., 17.],\n",
       "          [18., 19., 20.],\n",
       "          [21., 22., 23.]]]),\n",
       " tensor([[[ 0.,  1.,  2.],\n",
       "          [ 3.,  4.,  5.],\n",
       "          [ 6.,  7.,  8.],\n",
       "          [ 9., 10., 11.]],\n",
       " \n",
       "         [[12., 13., 14.],\n",
       "          [15., 16., 17.],\n",
       "          [18., 19., 20.],\n",
       "          [21., 22., 23.]]]))"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = torch.arange(24.0).reshape((2,4,3))\n",
    "b = torch.arange(24.0).reshape((2,4,3))\n",
    "a, b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "b3951366",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[ 0.,  2.,  4.],\n",
       "         [ 6.,  8., 10.],\n",
       "         [12., 14., 16.],\n",
       "         [18., 20., 22.]],\n",
       "\n",
       "        [[24., 26., 28.],\n",
       "         [30., 32., 34.],\n",
       "         [36., 38., 40.],\n",
       "         [42., 44., 46.]]])"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a + b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "27d34759",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[True, True, True],\n",
       "         [True, True, True],\n",
       "         [True, True, True],\n",
       "         [True, True, True]],\n",
       "\n",
       "        [[True, True, True],\n",
       "         [True, True, True],\n",
       "         [True, True, True],\n",
       "         [True, True, True]]])"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a == b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8790f68e",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "pytorch"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
