{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a2e803ef",
   "metadata": {},
   "source": [
    "# 2.3 线性代数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b780015",
   "metadata": {},
   "source": [
    "2.3.1 标量\n",
    "\n",
    "   问题1：何为标量\n",
    "    \n",
    "    · 标量无处不在，大道商场里各类商品的价格，小到菜市场找零用的钱的数值\n",
    "    · 数学中，常用的标量有很多，比如在 y = 2x + 1这个斜直线中，若给定x=1，则知道y=3，这里的1，3，2等数值均是标量；x，y则为一种待定的标量，也称为未知量\n",
    "    \n",
    "   问题2：何为标量空间\n",
    "   \n",
    "    · 我们可以将小写的x，y，z等字母用来表示‘标量变量’；用ℝ （双写R）表示所有‘连续’实数标量的空间"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c365e40",
   "metadata": {},
   "source": [
    "例1：标量和未知标量"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a9519cbb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([6.3200]), tensor([1.6600]), tensor([9.2967]), tensor([1.7124]))"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "a = torch.tensor([3.99])\n",
    "b = torch.tensor([2.33])\n",
    "\n",
    "a + b, a - b, a * b, a / b"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7dabc21f",
   "metadata": {},
   "source": [
    "    · 从上面的例子不难看出，标量可以通过赋值的方式定下来，也可以进行四则运算"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a15f7a7b",
   "metadata": {},
   "source": [
    "2.3.2 向量\n",
    "\n",
    "   问题1：何为向量\n",
    "   \n",
    "    · 前面我们知道了，普通的一个实数值可以是一个标量；由若干个标量排列组成的列表我们称之为向量\n",
    "    · 标量可以是随手写的一个数字，不一定具有实际意义；但就向量而言，当它表示某组数据的编号时，我们可以通过编号，来找到对应事物的特征，从而进行分析和修改\n",
    "   \n",
    "   问题2：向量如何书写\n",
    "   \n",
    "    · 在数学表示法中，我们常将向量用粗体、小写的符号：𝐱 、 𝐲 和 𝐳 ；对比下普通小写：x，y，z（无加粗）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "894b54d6",
   "metadata": {},
   "source": [
    "例2：向量创建和输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "edd92e5f",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0, 1, 2, 3, 4, 5])"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.arange(6)\n",
    "x"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acb1ba13",
   "metadata": {},
   "source": [
    "   问题3：如何提取向量中的某一个标量\n",
    "   \n",
    "    · 在前面的定义中，向量是有标量的列表组成的，所以参照python中的对内部元素的引用，可以知道利用下标引用    "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac73a0f9",
   "metadata": {},
   "source": [
    "例3：下标法调用向量中的值；tips: 想想还有没有别的方法~"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "05a1806e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(4)"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x[4]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d59e4fd",
   "metadata": {},
   "source": [
    "2.3.2.1 长度、维度和形状\n",
    "\n",
    "   问题1：何为长度，如何计算\n",
    "   \n",
    "    · 在数组或者列表中，其长度等价于内部元素的个数，2.1中已经说明\n",
    "   \n",
    "   问题2：何为维度，如何计算\n",
    "    \n",
    "    · 在0-D张量，也就是0维的标量时，我们将一个数字的维度定义为0，在向量中，我们将其维度定义为1-D，即维度为1，同理，矩阵的维度就是2—D，即维度为2\n",
    "    [tips]：张量对应的维度是对某一特定的轴而言\n",
    "   \n",
    "   问题3：形状如何判断\n",
    "    \n",
    "    · 可以利用shape函数，对于只有一个轴的向量，形状就只有1个元素;对于二维，会对应列出对应每个轴的长度\n",
    "    · 在python中元组的值是不支持直接修改的"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "71cdd223",
   "metadata": {},
   "source": [
    "例4：长度计算len（x）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "4c215d2c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "6"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "bcd2d084",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = torch.arange(32).reshape((4,8))\n",
    "len(a)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ad297f7",
   "metadata": {},
   "source": [
    "例5：形状计算X.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "647dafca",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([6]), torch.Size([4, 8]))"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x.shape, a.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6466840c",
   "metadata": {},
   "source": [
    "2.3.3 矩阵\n",
    "\n",
    "   问题1：何为矩阵\n",
    "   \n",
    "    · 在前面我们得知了向量是由标量从0~1维度的推广；矩阵正是由向量推广而来的，从1-n维，常见的有2维矩阵\n",
    "    \n",
    "   问题2：如何表示\n",
    "   \n",
    "    · 我们常用大写字母X、Y、Z等来表示矩阵，在代码中表示对应轴的个数的张量；\n",
    "    · 在数学表示法中，我们使用 𝐀∈ℝ𝑚×𝑛 来表示矩阵 𝐀 ，其由 𝑚 行和 𝑛 列的实值标量组成。直观地，我们可以将任意矩阵 𝐀∈ℝ𝑚×𝑛 视为一个表格，其中每个元素 𝑎𝑖𝑗 属于第 𝑖 行第 𝑗 列\n",
    "    ·当矩阵的行列相同时，称为方矩阵，简称方阵"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c9dbc60",
   "metadata": {},
   "source": [
    "例6：rehape函数指定矩阵的形状"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "47be0cfd",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[ 0,  1],\n",
       "         [ 2,  3],\n",
       "         [ 4,  5],\n",
       "         [ 6,  7]],\n",
       "\n",
       "        [[ 8,  9],\n",
       "         [10, 11],\n",
       "         [12, 13],\n",
       "         [14, 15]],\n",
       "\n",
       "        [[16, 17],\n",
       "         [18, 19],\n",
       "         [20, 21],\n",
       "         [22, 23]]])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(24).reshape(3,4,2) # 3-D，3维张量\n",
    "A"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "138609cd",
   "metadata": {},
   "source": [
    "   问题3：如何调用矩阵的元素\n",
    "   \n",
    "    · 利用下标法，方法同向量；这里要注意使用矩阵 𝐀 的小写字母索引下标 𝑎𝑖𝑗 来引用 [𝐀]𝑖𝑗 。为了表示简单，在必要时将逗号插入到单独的索引中，例如 𝑎2,3𝑗 和 [𝐀]2𝑖−1,3 "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "706bdfb8",
   "metadata": {},
   "source": [
    "例7：矩阵的转置"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "1c884b1b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[ 0,  8, 16],\n",
       "         [ 2, 10, 18],\n",
       "         [ 4, 12, 20],\n",
       "         [ 6, 14, 22]],\n",
       "\n",
       "        [[ 1,  9, 17],\n",
       "         [ 3, 11, 19],\n",
       "         [ 5, 13, 21],\n",
       "         [ 7, 15, 23]]])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A.T # 三维这里比较难理解，对应下方二维来理解"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "1a2ff9c4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0,  1,  2],\n",
       "         [ 3,  4,  5],\n",
       "         [ 6,  7,  8],\n",
       "         [ 9, 10, 11]]),\n",
       " tensor([[ 0,  3,  6,  9],\n",
       "         [ 1,  4,  7, 10],\n",
       "         [ 2,  5,  8, 11]]))"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "B = torch.arange(12).reshape(4,3)\n",
    "B, B.T"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ceb136f0",
   "metadata": {},
   "source": [
    "例8：特殊方阵--对称矩阵的转置"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "f5351094",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[1, 2, 3],\n",
       "         [2, 0, 4],\n",
       "         [3, 4, 5]]),\n",
       " tensor([[1, 2, 3],\n",
       "         [2, 0, 4],\n",
       "         [3, 4, 5]]))"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "C = torch.tensor([[1, 2, 3], [2, 0, 4], [3, 4, 5]])\n",
    "C, C.T"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "5cc51140",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[2, 3, 3],\n",
       "         [3, 0, 2],\n",
       "         [3, 2, 2]]),\n",
       " tensor([[2, 3, 3],\n",
       "         [3, 0, 2],\n",
       "         [3, 2, 2]]))"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "L = torch.tensor([[2, 3, 3], [3, 0, 2], [3, 2, 2]])\n",
    "L, L.T"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "bd807ba6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[True, True, True],\n",
       "        [True, True, True],\n",
       "        [True, True, True]])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "C == C.T # 只有形状相同的矩阵和转置才能来比较~"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75acc5a0",
   "metadata": {},
   "source": [
    "   问题4：矩阵有什么作用\n",
    "    \n",
    "    · 矩阵可以用来表示一个事物的特征和参数，例如在房屋样本数据中，某一行可以看出房屋编号、房间数量、房屋位置和价格云云，\n",
    "    · 虽然向量默认方向是列向量，但在数据样本中行向量更为常见"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e47ec9de",
   "metadata": {},
   "source": [
    "2.3.4 张量\n",
    "\n",
    "   问题1：张量的作用\n",
    "   \n",
    "    · 例如在图像处理时，图像通常以n维数组形式出现，其中3个轴分别对应高度、宽度及通道轴，用于堆叠颜色通道（红、绿、蓝）"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7bae04b6",
   "metadata": {},
   "source": [
    "例9：三维张量"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "058ded0c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[ 0,  1,  2,  3],\n",
       "         [ 4,  5,  6,  7],\n",
       "         [ 8,  9, 10, 11]],\n",
       "\n",
       "        [[12, 13, 14, 15],\n",
       "         [16, 17, 18, 19],\n",
       "         [20, 21, 22, 23]]])"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = torch.arange(24).reshape(2, 3, 4)\n",
    "X"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6df45547",
   "metadata": {},
   "source": [
    "2.3.5 张量算法的基本性质\n",
    "\n",
    "   问题1：张量算法的优势\n",
    "   \n",
    "    · 例如标量、向量、矩阵和任意数量轴的张量；在按照元素运算时，一元运算不会改变形状；若给定形状相同的任意2个张量，任何按元素的二元运算的结果都是相同形状的张量"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1bcd610",
   "metadata": {},
   "source": [
    "例10：矩阵加法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "b10be5c3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.],\n",
       "         [12., 13., 14., 15.],\n",
       "         [16., 17., 18., 19.]]),\n",
       " tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.],\n",
       "         [12., 13., 14., 15.],\n",
       "         [16., 17., 18., 19.]]),\n",
       " tensor([[ 0.,  2.,  4.,  6.],\n",
       "         [ 8., 10., 12., 14.],\n",
       "         [16., 18., 20., 22.],\n",
       "         [24., 26., 28., 30.],\n",
       "         [32., 34., 36., 38.]]))"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(20, dtype=torch.float32).reshape(5, 4)\n",
    "B = A.clone() # 通过分配新内存，将A的一个副本分配给B\n",
    "A, B, A + B"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b59cad5a",
   "metadata": {},
   "source": [
    "例11：矩阵元素乘法--哈达玛积（Hadamard product）（数学符号⊙）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "c16a4eab",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.],\n",
       "         [12., 13., 14., 15.],\n",
       "         [16., 17., 18., 19.]]),\n",
       " tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.],\n",
       "         [12., 13., 14., 15.],\n",
       "         [16., 17., 18., 19.]]),\n",
       " tensor([[  0.,   1.,   4.,   9.],\n",
       "         [ 16.,  25.,  36.,  49.],\n",
       "         [ 64.,  81., 100., 121.],\n",
       "         [144., 169., 196., 225.],\n",
       "         [256., 289., 324., 361.]]))"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A, B, A * B"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ae7c4a3",
   "metadata": {},
   "source": [
    "例12：张量和标量的运算"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "3228dfd4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[ 0,  1,  2,  3],\n",
       "          [ 4,  5,  6,  7],\n",
       "          [ 8,  9, 10, 11]],\n",
       " \n",
       "         [[12, 13, 14, 15],\n",
       "          [16, 17, 18, 19],\n",
       "          [20, 21, 22, 23]]]),\n",
       " tensor([[[ 3,  4,  5,  6],\n",
       "          [ 7,  8,  9, 10],\n",
       "          [11, 12, 13, 14]],\n",
       " \n",
       "         [[15, 16, 17, 18],\n",
       "          [19, 20, 21, 22],\n",
       "          [23, 24, 25, 26]]]),\n",
       " torch.Size([2, 3, 4]))"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = 3\n",
    "X = torch.arange(24).reshape(2, 3, 4)\n",
    "X, a + X, (a * X).shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "388325e0",
   "metadata": {},
   "source": [
    "2.3.6 降维\n",
    "\n",
    "   问题1：什么是降维\n",
    "    \n",
    "    · 在一维张量，也就是我们常说的向量中，对全部元素进行求和后等到一个数值，这样的过程我们称之为降维\n",
    "    · 类似的将一个2维张量通过函数进行求和后，可以是保留某一个轴，或者全部求和，都达到了降维的效果"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e5e7d53",
   "metadata": {},
   "source": [
    "例13：求和, 对应数学中： ∑𝑑 𝑖=1 𝑥𝑖"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "f5567d74",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([0., 1., 2., 3.]), tensor(6.))"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.arange(4, dtype=torch.float32)\n",
    "x, x.sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c71240ed",
   "metadata": {},
   "source": [
    "例14：求任意形状张量的元素和, 如：矩阵 𝐀 中元素的和可以记为 ∑𝑚𝑖=1∑𝑛𝑗=1𝑎𝑖𝑗 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "6a5e127f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([5, 4]), tensor(190.))"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(20, dtype=torch.float32).reshape(5, 4)\n",
    "A.shape, A.sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ef3bf92",
   "metadata": {},
   "source": [
    "    · tips：在默认情况下，求和函数会沿着所有的轴降低张量的维度，使它变成一个标量"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "417e3662",
   "metadata": {},
   "source": [
    "例15：沿着轴0降维，通常是行"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "cf031b7f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([40., 45., 50., 55.]), torch.Size([4]))"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A_sum_axis0 = A.sum(axis=0)\n",
    "A_sum_axis0, A_sum_axis0.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5c4c85e2",
   "metadata": {},
   "source": [
    "例16：沿着轴1降维，通常是列"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "3b211b37",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([ 6., 22., 38., 54., 70.]), torch.Size([5]))"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A_sum_axis1 = A.sum(axis=1)\n",
    "A_sum_axis1, A_sum_axis1.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "515a86ca",
   "metadata": {},
   "source": [
    "    · 此时输入的轴1的维数在输出形状中消失"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adee2183",
   "metadata": {},
   "source": [
    "例17：沿着行和列对矩阵求和 equal to 对矩阵的所有元素进行求和"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "3c744aec",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(190.), tensor(True))"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(20, dtype=torch.float32).reshape(5, 4)\n",
    "L = A.sum(axis=[0,1])\n",
    "L, L == A.sum() # 不同方法的共同结果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "119753c3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Tensor, torch.Tensor)"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "K = A.sum()\n",
    "type(L), type(K)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f80b32e",
   "metadata": {},
   "source": [
    "例18：利用2种方法求平均值"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "df606b4b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(9.5000), tensor(9.5000))"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A.mean(), A.sum() / A.numel()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31788ada",
   "metadata": {},
   "source": [
    "    · RuntimeError: Can only calculate the mean of floating types. Got Long instead.\n",
    "    · bug:利用mean 按照某轴求和时，该张量矩阵必须是浮点类型的"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "966b40ab",
   "metadata": {},
   "source": [
    "例19：沿指定轴降低张量的维度计算平均值的函数及普通方法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "b33fa320",
   "metadata": {},
   "outputs": [
    {
     "ename": "TypeError",
     "evalue": "numel() takes no keyword arguments",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-24-abe6d7127367>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mA\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmean\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mA\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m/\u001b[0m \u001b[0mA\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnumel\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# 错误示范\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m: numel() takes no keyword arguments"
     ]
    }
   ],
   "source": [
    "A.mean(axis=0), A.sum(axis=0) / A.numel(axis=0) # 错误示范"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "3fb497c9",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([ 8.,  9., 10., 11.]), tensor([ 8.,  9., 10., 11.]))"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A.mean(axis=0), A.sum(axis=0) / A.shape[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2151f773",
   "metadata": {},
   "source": [
    "2.3.6.1 非降维求和\n",
    "\n",
    "   问题1：为什么要采用非降维求和\n",
    "    \n",
    "    · 有时可以保留某一行或某一列，进行矩阵的其它操作"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5e5fc4b2",
   "metadata": {},
   "source": [
    "例20：非降维求和的应用例子"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "0a615a2f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 6.],\n",
       "        [22.],\n",
       "        [38.],\n",
       "        [54.],\n",
       "        [70.]])"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sum_A = A.sum(axis=1, keepdims=True)\n",
    "sum_A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "1aac317b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0000, 0.1667, 0.3333, 0.5000],\n",
       "        [0.1818, 0.2273, 0.2727, 0.3182],\n",
       "        [0.2105, 0.2368, 0.2632, 0.2895],\n",
       "        [0.2222, 0.2407, 0.2593, 0.2778],\n",
       "        [0.2286, 0.2429, 0.2571, 0.2714]])"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A / sum_A # 广播实现非降维后的，额外应用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "e9708d3c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  6.,  8., 10.],\n",
       "         [12., 15., 18., 21.],\n",
       "         [24., 28., 32., 36.],\n",
       "         [40., 45., 50., 55.]]),\n",
       " tensor([[ 0.,  1.,  2.,  3.],\n",
       "         [ 4.,  5.,  6.,  7.],\n",
       "         [ 8.,  9., 10., 11.],\n",
       "         [12., 13., 14., 15.],\n",
       "         [16., 17., 18., 19.]]))"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A.cumsum(axis=0), A # 此方法诠释了，沿着某轴进行累计求和，但是不会降维"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "113932d2",
   "metadata": {},
   "source": [
    "2.3.7 点积（Dot Product）\n",
    "    \n",
    "   问题1：什么是点积，形式是怎么样的\n",
    "   \n",
    "    · 给出2个向量，元素个数相同，则对应位置元素相乘后累加的结果，就是2者的点积\n",
    "    · 两个向量 𝐱,𝐲∈ℝ𝑑，它们的点积（dot product）𝐱⊤𝐲（或  ⟨𝐱,𝐲⟩ ）是按相同位置元素乘积的和：𝐱⊤𝐲=∑𝑑𝑖=1𝑥𝑖𝑦𝑖 "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0caa020",
   "metadata": {},
   "source": [
    "例21：点积的2种方法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "9ce65b85",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([1., 1., 1., 1.]), tensor([0., 1., 2., 3.]), tensor(6.))"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x = torch.ones(4, dtype=torch.float32)\n",
    "y = torch.tensor([0.0, 1, 2, 3])\n",
    "x, y, torch.dot(x, y) # 1、公式法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "4b29891c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(6.)"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.sum(x * y) # 2、元素对应相乘求和法"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95db1e10",
   "metadata": {},
   "source": [
    "    ·[知识补充] 例如：给定一组由向量 𝐱∈ℝ𝑑 表示的值，和一组由 𝐰∈ℝ𝑑 表示的权重。𝐱 中的值根据权重 𝐰 的加权和可以表示为点积 𝐱⊤𝐰。当权重为非负数且和为 1（即(∑𝑑 𝑖=1𝑤𝑖=1) ）时，点积表示 加权平均（weighted average）。\n",
    "    · 将两个向量归一化得到单位长度后，点积表示它们夹角的余弦。我们将在本节的后面正式介绍长度（length）的概念"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a8c023d",
   "metadata": {},
   "source": [
    "2.3.8 矩阵-向量积\n",
    "\n",
    "   问题1：什么是矩阵-向量积\n",
    "   \n",
    "    · 我们已经知道了点积的计算方法；我们可以先把一个4 × 4的矩阵A理解为4个行向量，对每个行向量，然后进行转置；就可以把矩阵A先理解为一个按照列方向写出的向量，当中每个元素从上到下，对应原矩阵中第一行到第四行"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3354828a",
   "metadata": {},
   "source": [
    "例22：利用np.dot(A, x)或者torch.mv(A, x)函数进行矩阵-向量积的计算，依据喜好选择即可"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "144ae7da",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([5, 4]),\n",
       " torch.Size([4]),\n",
       " tensor([ 6., 22., 38., 54., 70.]),\n",
       " array([ 6., 22., 38., 54., 70.], dtype=float32))"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "A.shape, x.shape, torch.mv(A,x), np.dot(A, x)\n",
    "# 请注意，此时想成功运行，前提是矩阵A的列数需和x的长度equal"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9bd52d5b",
   "metadata": {},
   "source": [
    "2.3.9 矩阵-矩阵乘法\n",
    "    \n",
    "   问题1：如何进行矩阵的乘法\n",
    "   \n",
    "    · 在线性代数中，只有A矩阵的列==B矩阵的行时才可以进行\n",
    "    · 在计算机中，利用将A矩阵的行进行\n",
    "    · 用行向量 𝐚⊤𝑖∈ℝ𝑘  表示矩阵 𝐀 的第  𝑖  行，并让列向量 𝐛𝑗∈ℝ𝑘作为矩阵 𝐁 的第  𝑗  列。要生成矩阵积  𝐂=𝐀𝐁\n",
    "    · 简单地将每个元素 𝑐𝑖𝑗 计算为点积 𝐚⊤𝑖𝐛𝑗"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c91cf383",
   "metadata": {},
   "source": [
    "例23：给出2个矩阵，进行相乘"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "373a176b",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "expected scalar type Long but found Float",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-32-e016b6c4c872>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0mA\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m15\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreshape\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      2\u001b[0m \u001b[0mB\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mones\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# 修改方案，可以把15--》15.0 或者 利用下面的例子，转换为float32\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmm\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mA\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mB\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m: expected scalar type Long but found Float"
     ]
    }
   ],
   "source": [
    "A = torch.arange(15).reshape(5,3)\n",
    "B = torch.ones(3,4) # 修改方案，可以把15--》15.0 或者 利用下面的例子，转换为float32\n",
    "torch.mm(A, B)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "b1de55f9",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 6.,  6.,  6.],\n",
       "        [22., 22., 22.],\n",
       "        [38., 38., 38.],\n",
       "        [54., 54., 54.],\n",
       "        [70., 70., 70.]])"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(20, dtype=torch.float32).reshape(5, 4)\n",
    "B = torch.ones(4,3)\n",
    "torch.mm(A, B) # 矩阵乘法不要和哈达玛积混淆"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d890b26",
   "metadata": {},
   "source": [
    "   问题1：矩阵乘法和哈达玛积的区别\n",
    "   \n",
    "    · 哈达玛积是对应位置元素的乘积且放到固定位置\n",
    "    · 矩阵乘法是按某列或者行对×另一个矩阵的某行或某列的乘积和放到下标为前2者组合的行列位置处"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc6a42b6",
   "metadata": {},
   "source": [
    "2.3.10 范数\n",
    "\n",
    "   问题1：什么是范数，有哪些有点\n",
    "   \n",
    "    · 在线性代数中，向量范数是将向量映射到标量的函数 f\n",
    "    · 范数是更为强化的距离定义，在普通距离定义上增加了一条数乘的运算法则\n",
    "    · 首先要肯定范数在线代中是最常用的运算符之一\n",
    "    · 范数还可以提供向量不涉及维度时分量的大小（标量的大小）\n",
    "   \n",
    "   问题2：范数需要满足哪些属性\n",
    "    \n",
    "    · 给定任意的向量𝐱\n",
    "    · 其一，如果我们按常数因子𝛼缩放向量的所有元素，则范数也会按相同常数因子的绝对值进行缩放，式子：𝑓(𝛼𝐱)=|𝛼|𝑓(𝐱)\n",
    "    · 其二，三角不等式：𝑓(𝐱+𝐲)≤𝑓(𝐱)+𝑓(𝐲)\n",
    "    · 其三，范数必须是非负的：𝑓(𝐱)≥0\n",
    "    · 其四，当且仅当向量中元素全为0时，范数最小为0：∀𝑖,[𝐱]𝑖=0⇔𝑓(𝐱)=0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7070d2ff",
   "metadata": {},
   "source": [
    "例24：欧几里得距离是一个L2范数，计算如下（‖𝐱‖2 = ‖𝐱‖）；在 𝐿2 范数中常常省略下标 2 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "241f896f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(5.)"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "u = torch.tensor([3.0, -4.0])\n",
    "torch.norm(u)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "554d83a6",
   "metadata": {},
   "source": [
    "    · tips：在深度学习中，更为喜欢使用L2范数的平方\n",
    "    \n",
    "   问题2：什么是L1范数\n",
    "   \n",
    "    · L1范数，在一个向量中，是对其内部所有元素取绝对值后，再进行累加等到的和"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "902ee431",
   "metadata": {},
   "source": [
    "例24：求u向量的L1范数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "a4bd4b7f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(7.)"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.abs(u).sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4756f869",
   "metadata": {},
   "source": [
    "   问题3：Lp范数和Lf范数是什么\n",
    "   \n",
    "    · LP范数不是一个范数，而是一组范数，L1和L2范数是其更一般的特例\n",
    "    · Lf范数又称为弗罗贝尼丝范数，是矩阵元素平方和的平方根"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ed76a71",
   "metadata": {},
   "source": [
    "例25：举例计算Lf范数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "26863163",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(6.)"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.norm(torch.ones((4, 9))) #注意 ones内的参数得是元组"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "449535d8",
   "metadata": {},
   "source": [
    "2.3.10.1 范数和目标\n",
    "      \n",
    "    · 虽然我们不想走得太远，但我们可以对这些概念为什么有用有一些直觉。在深度学习中，我们经常试图解决优化问题： 最大化 分配给观测数据的概率; 最小化 预测和真实观测之间的距离。 用向量表示物品(如单词、产品或新闻文章)，以便最小化相似项目之间的距离，最大化不同项目之间的距离。 通常，目标，或许是深度学习算法最重要的组成部分(除了数据)，被表达为范数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2491e37f",
   "metadata": {},
   "source": [
    "2.3.11. 关于线性代数的更多信息\n",
    "\n",
    "    · 作用，这一节学习的线性代数可以帮助理解大量的现代神学习的知识\n",
    "    · 除此外，比如矩阵可以分级为因子。可以降低模型维度，优化运算\n",
    "    · 机器学习的整个子领域都侧重于使用矩阵分解及其向高阶张量的泛化来发现数据集中的结构并解决预测问题"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b46398c",
   "metadata": {},
   "source": [
    "2.3.12 小结\n",
    "\n",
    "    · 标量、向量、矩阵和张量是线性代数中的基本数学对象。\n",
    "    · 向量泛化自标量，矩阵泛化自向量。\n",
    "    · 标量、向量、矩阵和张量分别具有零、一、二和任意数量的轴。\n",
    "    · 一个张量可以通过sum 和 mean沿指定的轴降低维度。\n",
    "    · 两个矩阵的按元素乘法被称为他们的哈达玛积。它与矩阵乘法不同。\n",
    "    · 在深度学习中，我们经常使用范数，如  𝐿1 范数、 𝐿2 范数和弗罗贝尼乌斯范数。\n",
    "    · 我们可以对标量、向量、矩阵和张量执行各种操作"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2c5cc60",
   "metadata": {},
   "source": [
    "2.3.13. 练习\n",
    "\n",
    "    · 证明一个矩阵  𝐀  的转置的转置是  𝐀 ： (𝐀⊤)⊤=𝐀 。\n",
    "    · 给出两个矩阵  𝐀  和  𝐁 , 显示转置的和等于和的转置： 𝐀⊤+𝐁⊤=(𝐀+𝐁)⊤ .\n",
    "    · 给定任意方矩阵 𝐀 ，  𝐀+𝐀⊤ 总是对称的吗?为什么?\n",
    "    · 我们在本节中定义了形状（2, 3, 4）的张量 X。len(X)的输出结果是什么？\n",
    "    · 对于任意形状的张量X, len(X)是否总是对应于X特定轴的长度?这个轴是什么?\n",
    "    · 运行 A / A.sum(axis=1)，看看会发生什么。你能分析原因吗？\n",
    "    · 当你在曼哈顿的两点之间旅行时，你需要在坐标上走多远，也就是说，就大街和街道而言？你能斜着走吗？\n",
    "    · 考虑一个具有形状（2, 3, 4）的张量，在轴 0,1,2 上的求和输出是什么形状?\n",
    "    · 向 linalg.norm 函数提供 3 个或更多轴的张量，并观察其输出。对于任意形状的张量这个函数计算得到什么?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30e97bf1",
   "metadata": {},
   "source": [
    "练习1：证明一个矩阵 𝐀 的转置的转置是𝐀：(𝐀⊤)⊤=𝐀"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "d4b1b475",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0,  1,  2],\n",
       "         [ 3,  4,  5],\n",
       "         [ 6,  7,  8],\n",
       "         [ 9, 10, 11]]),\n",
       " tensor([[ 0,  1,  2],\n",
       "         [ 3,  4,  5],\n",
       "         [ 6,  7,  8],\n",
       "         [ 9, 10, 11]]))"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "A = torch.arange(12).reshape(4,3)\n",
    "L = A.T\n",
    "A, L.T "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "cf72bf98",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0,  1,  2,  3],\n",
       "         [ 4,  5,  6,  7],\n",
       "         [ 8,  9, 10, 11]]),\n",
       " tensor([[ 0,  1,  2,  3],\n",
       "         [ 4,  5,  6,  7],\n",
       "         [ 8,  9, 10, 11]]))"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "B = torch.arange(12).reshape(3,4)\n",
    "k = B.T\n",
    "k.T, B"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ab5adc7",
   "metadata": {},
   "source": [
    "练习2：任给两个矩阵 𝐀 和 𝐁 , 显示转置的和等于和的转置： 𝐀⊤+𝐁⊤=(𝐀+𝐁)⊤"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "87e9ab1d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 4, 10, 16],\n",
       "         [ 4, 10, 16],\n",
       "         [ 4, 10, 16]]),\n",
       " tensor([[ 4, 10, 16],\n",
       "         [ 4, 10, 16],\n",
       "         [ 4, 10, 16]]))"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).reshape(3, 3)\n",
    "B = torch.tensor([[3, 2, 1], [6, 5, 4], [9, 8, 7]]).reshape(3, 3)\n",
    "A.T + B.T, (A + B).T # 矩阵相加必须形状相同"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e259b17",
   "metadata": {},
   "source": [
    "练习3：给定任意方矩阵 𝐀，𝐀+𝐀⊤ 总是对称的吗?为什么?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "ff213b2b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[ 0,  1,  2,  3],\n",
       "         [ 4,  5,  6,  7],\n",
       "         [ 8,  9, 10, 11],\n",
       "         [12, 13, 14, 15]]),\n",
       " tensor([[ 0,  5, 10, 15],\n",
       "         [ 5, 10, 15, 20],\n",
       "         [10, 15, 20, 25],\n",
       "         [15, 20, 25, 30]]))"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(16).reshape(4, 4)\n",
    "A, A + A.T"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "f5f0dabb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[0, 1, 2],\n",
       "         [3, 4, 5],\n",
       "         [6, 7, 8]]),\n",
       " tensor([[ 0,  4,  8],\n",
       "         [ 4,  8, 12],\n",
       "         [ 8, 12, 16]]))"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "B = torch.arange(9).reshape(3, 3)\n",
    "B, B + B.T"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e1681e3",
   "metadata": {},
   "source": [
    "练习4：我们在本节中定义了形状（2, 3, 4）的张量 X。len(X)的输出结果是什么？"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "7bb2b29c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X  = torch.arange(24).reshape(2, 3, 4)\n",
    "len(X) # 元素个数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aa486bc5",
   "metadata": {},
   "source": [
    "练习5：对于任意形状的张量X, len(X)是否总是对应于X特定轴的长度?这个轴是什么?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "998cc805",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = torch.arange(12).reshape(4, 3)\n",
    "len(X) # 对应张量X的第一个轴，这个轴是张量内部元素个数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "5e073803",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = torch.arange(24).reshape(3, 4, 2)\n",
    "len(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e4c3cef",
   "metadata": {},
   "source": [
    "练习6：运行 A / A.sum(axis=1)，看看会发生什么。你能分析原因吗？"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "293e932c",
   "metadata": {},
   "outputs": [
    {
     "ename": "RuntimeError",
     "evalue": "The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 1",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-45-b83e2b3b23df>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0mA\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m12\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreshape\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mA\u001b[0m \u001b[0;34m/\u001b[0m \u001b[0mA\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m#张量a(3)的大小必须与张量b(4)在非单点维数1上的大小相匹配？\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mRuntimeError\u001b[0m: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 1"
     ]
    }
   ],
   "source": [
    "A = torch.arange(12).reshape(4,3)\n",
    "A / A.sum(axis=1) #张量a(3)的大小必须与张量b(4)在非单点维数1上的大小相匹配？"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "a943048e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0000, 0.0455, 0.0526, 0.0556],\n",
       "        [0.6667, 0.2273, 0.1579, 0.1296],\n",
       "        [1.3333, 0.4091, 0.2632, 0.2037],\n",
       "        [2.0000, 0.5909, 0.3684, 0.2778]])"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(16).reshape(4, 4)\n",
    "A / A.sum(axis=1) # "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "be0290e1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0000, 0.0833, 0.0952],\n",
       "        [1.0000, 0.3333, 0.2381],\n",
       "        [2.0000, 0.5833, 0.3810]])"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(9).reshape(3, 3)\n",
    "A / A.sum(axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "c5b147b2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0000, 0.1000, 0.2000, 0.3000, 0.4000],\n",
       "        [0.1429, 0.1714, 0.2000, 0.2286, 0.2571],\n",
       "        [0.1667, 0.1833, 0.2000, 0.2167, 0.2333]])"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(15).reshape(3, 5)\n",
    "A / A.sum(axis=1, keepdims=True) # keepdims=True  使两者维度相同"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "502e132a",
   "metadata": {},
   "source": [
    "练习7：当你在曼哈顿的两点之间旅行时，你需要在坐标上走多远，也就是说，就大街和街道而言？你能斜着走吗？"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e4d2e9f0",
   "metadata": {},
   "source": [
    "    · 无法斜着走，所以距离为 d(i,j) = |x1 - x2| + |y1 - y2|.\n",
    "    · 其中出发点的坐标为（x1, y1），终点的坐标为（x2, y2）"
   ]
  },
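  {
   "cell_type": "markdown",
   "id": "1d9e7c2a",
   "metadata": {},
   "source": [
    "    · This distance is just the L1 norm of the difference between the two coordinate vectors; a minimal sketch with made-up coordinates:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2e3f8b6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "start = torch.tensor([1.0, 2.0])  # (x1, y1), hypothetical coordinates\n",
    "end = torch.tensor([4.0, 6.0])    # (x2, y2)\n",
    "torch.abs(start - end).sum()      # Manhattan (L1) distance: |1-4| + |2-6| = 7"
   ]
  },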
  {
   "cell_type": "markdown",
   "id": "72e38ada",
   "metadata": {},
   "source": [
    "练习8：考虑一个具有形状（2, 3, 4）的张量，在轴 0,1,2 上的求和输出是什么形状?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "0e8fab2a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[ 0.,  1.,  2.,  3.],\n",
       "          [ 4.,  5.,  6.,  7.],\n",
       "          [ 8.,  9., 10., 11.]],\n",
       " \n",
       "         [[12., 13., 14., 15.],\n",
       "          [16., 17., 18., 19.],\n",
       "          [20., 21., 22., 23.]]]),\n",
       " tensor([[12., 14., 16., 18.],\n",
       "         [20., 22., 24., 26.],\n",
       "         [28., 30., 32., 34.]]),\n",
       " tensor([[12., 15., 18., 21.],\n",
       "         [48., 51., 54., 57.]]),\n",
       " tensor([[ 6., 22., 38.],\n",
       "         [54., 70., 86.]]))"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "A = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)\n",
    "A, A.sum(axis=0), A.sum(axis=1), A.sum(axis=2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c37c751d",
   "metadata": {},
   "source": [
    "    · 解释：如果按照0轴，也就是2那个轴，这样剩下的矩阵是（3，4）；同理其它也是这样"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5561f2f",
   "metadata": {},
   "source": [
    "练习9：向 linalg.norm 函数提供 3 个或更多轴的张量，并观察其输出。对于任意形状的张量这个函数计算得到什么?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "bc5924a4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[[ 3.       ,  4.1231055,  5.3851647]],\n",
       "\n",
       "       [[10.816654 , 12.206555 , 13.601471 ]],\n",
       "\n",
       "       [[19.209373 , 20.615528 , 22.022715 ]],\n",
       "\n",
       "       [[27.658634 , 29.068884 , 30.479502 ]]], dtype=float32)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "X = torch.arange(24.0).reshape(4, 2, 3)\n",
    "X_norm = np.linalg.norm(X, ord=None, axis=1, keepdims=True)\n",
    "X_norm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9629a350",
   "metadata": {},
   "source": [
    "    · 输出还是一个三维张量\n",
    "    · 对于这个方法要注意是否ord=None ，若为其它范数，例如 ord = 2， ord = 无穷，其结果都是不同的"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "pytorch"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
