{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## PyTorch DataLoader efficiency\n",
    "PyTorch has no particularly efficient built-in data storage. Reading images with cv2.imread (OpenCV), for example, is very slow, and so is parsing text files, e.g. point clouds stored as plain text. Storage formats such as lmdb, h5py, pth, or n5 can help.\n",
    "\n",
    "A useful GitHub repo on this topic:\n",
    "\n",
    "https://github.com/Lyken17/Efficient-PyTorch\n",
    "\n",
    "It covers how to use lmdb, h5py, pth, n5, and similar storage formats.\n",
    "\n",
    "My personal experience: HDF5 is fast to read from, but avoid it if you need multi-threaded or multi-process reads and writes, since parallel I/O with h5py is fairly involved:\n",
    "\n",
    "http://docs.h5py.org/en/stable/mpi.html\n",
    "\n",
    "Below is HDF5 read/write code. Note that strings must be encoded on write and decoded on read, and it is best to use create_dataset; writing strings directly can cause errors when reading them back:\n",
    "\n",
    "Write:\n",
    "```python\n",
    "imagenametotal_.append(os.path.join('images', imagenametotal).encode())\n",
    "with h5py.File(outfile, 'w') as f:\n",
    "    f.create_dataset('imagename', data=imagenametotal_)\n",
    "    f['part'] = parts_\n",
    "    f['S'] = Ss_\n",
    "    f['image'] = cvimgs\n",
    "```\n",
    "\n",
    "Read:\n",
    "```python\n",
    "with h5py.File(outfile, 'r') as f:\n",
    "    imagename = [x.decode() for x in f['imagename']]\n",
    "    kp2ds = np.array(f['part'])\n",
    "    kp3ds = np.array(f['S'])\n",
    "    cvimgs = np.array(f['image'])\n",
    "```"
   ]
  },
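  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of how such an HDF5 file might be consumed from a Dataset (the dataset layout follows the writer code above; opening the file lazily in __getitem__ is an assumption that avoids sharing one h5py handle across DataLoader worker processes):\n",
    "\n",
    "```python\n",
    "import h5py\n",
    "import torch\n",
    "from torch.utils.data import Dataset\n",
    "\n",
    "class H5Dataset(Dataset):\n",
    "    def __init__(self, h5_path):\n",
    "        self.h5_path = h5_path\n",
    "        self.file = None\n",
    "        with h5py.File(h5_path, 'r') as f:\n",
    "            self.length = len(f['image'])\n",
    "\n",
    "    def __len__(self):\n",
    "        return self.length\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        if self.file is None:  # opened once per worker process\n",
    "            self.file = h5py.File(self.h5_path, 'r')\n",
    "        img = torch.from_numpy(self.file['image'][idx])\n",
    "        kp2d = torch.from_numpy(self.file['part'][idx])\n",
    "        return img, kp2d\n",
    "```"
   ]
  },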
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Multiple loss functions\n",
    "(I have not personally run into this one.)\n",
    "\n",
    "When the loss has several components, e.g. loss = loss1 + loss2 + loss3,\n",
    "\n",
    "wrap the three losses in a single class and sum them inside its forward method."
   ]
  },
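  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of that pattern (the three criteria below are illustrative placeholders):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class CombinedLoss(nn.Module):\n",
    "    # bundle several loss terms in one class; forward sums them\n",
    "    def __init__(self, criteria):\n",
    "        super().__init__()\n",
    "        self.criteria = nn.ModuleList(criteria)\n",
    "\n",
    "    def forward(self, pred, target):\n",
    "        # loss = loss1 + loss2 + loss3\n",
    "        return sum(c(pred, target) for c in self.criteria)\n",
    "\n",
    "criterion = CombinedLoss([nn.MSELoss(), nn.L1Loss(), nn.SmoothL1Loss()])\n",
    "```"
   ]
  },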
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Low GPU utilization and wasted GPU memory\n",
    "\n",
    "Common settings:\n",
    "\n",
    "(1) At the top of the main function (trades a little GPU memory for speed, since benchmark mode auto-tunes the fastest conv algorithms):\n",
    "\n",
    "```python\n",
    "torch.backends.cudnn.benchmark = True\n",
    "torch.backends.cudnn.deterministic = False\n",
    "torch.backends.cudnn.enabled = True\n",
    "```\n",
    "\n",
    "(2) Before each training epoch (periodically frees cached GPU memory; the effect does not feel very noticeable):\n",
    "\n",
    "`torch.cuda.empty_cache()`\n",
    "\n",
    "(3) For variables that are no longer needed (same idea as above; for some operations the effect is quite noticeable):\n",
    "\n",
    "`del xxx` (the variable name)\n",
    "\n",
    "(4) Setting the dataloader's `__len__` (the dataloader can stall intermittently; defining it like this avoids much of that):\n",
    "\n",
    "```python\n",
    "def __len__(self):\n",
    "    return self.images.shape[0]\n",
    "```\n",
    "\n",
    "(5) Dataloader prefetch settings (loads data while the model is training, raising GPU utilization a bit):\n",
    "\n",
    "```python\n",
    "train_loader = torch.utils.data.DataLoader(\n",
    "    train_dataset,\n",
    "    pin_memory=True,\n",
    ")\n",
    "```\n",
    "\n",
    "(6) Network design matters. Also, do not initialize any module you will not use: PyTorch's initialization and forward are separate, so a module you never call still gets initialized (and allocates memory).\n",
    "\n"
   ]
  },
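  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Point (6) can be seen directly: a layer that forward never touches still allocates and initializes its weights at construction time (the module below is a made-up illustration):\n",
    "\n",
    "```python\n",
    "import torch.nn as nn\n",
    "\n",
    "class WithUnused(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.used = nn.Linear(10, 10)\n",
    "        self.unused = nn.Linear(4096, 4096)  # never called in forward\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.used(x)\n",
    "\n",
    "# the parameter count still includes the 4096x4096 weight of the unused layer\n",
    "n_params = sum(p.numel() for p in WithUnused().parameters())\n",
    "```"
   ]
  },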
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## nn.Module.cuda() vs. Tensor.cuda()\n",
    "\n",
    "For both models and tensors, cuda() moves data from CPU to GPU memory, but the two behave differently.\n",
    "\n",
    "For nn.Module:\n",
    "\n",
    "    model = model.cuda()\n",
    "    model.cuda()\n",
    "\n",
    "The two lines above are equivalent: both move the model itself to the GPU in place.\n",
    "\n",
    "For Tensor:\n",
    "\n",
    "Unlike nn.Module, tensor.cuda() only returns a copy of the tensor in GPU memory and does not change the tensor itself. You must therefore rebind the result, i.e. tensor = tensor.cuda().\n",
    "\n",
    "Example:\n",
    "```python\n",
    "model = create_a_model()\n",
    "tensor = torch.zeros([2, 3, 10, 10])\n",
    "model.cuda()\n",
    "tensor.cuda()\n",
    "model(tensor)    # raises an error: tensor is still on the CPU\n",
    "tensor = tensor.cuda()\n",
    "model(tensor)    # runs fine\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## loss.item(): converting a loss to a Python number\n",
    "\n",
    "Take the widely used pattern total_loss += loss.data[0] as an example. Before PyTorch 0.4.0, loss was a Variable wrapping a tensor of shape (1,), but since PyTorch 0.4.0 loss is a zero-dimensional scalar tensor. Indexing a scalar is meaningless (it raises something like 'invalid index to scalar variable'). Use loss.item() to get the Python number out of the scalar, i.e. write instead:\n",
    "\n",
    "total_loss += loss.item()\n",
    "\n",
    "If you accumulate the loss without converting it to a Python number, your program's memory usage may grow. The right-hand side of the expression above used to be a Python float, but it is now a zero-dimensional tensor, so the running total accumulates tensors together with their gradient history, which can build a very large autograd graph and waste memory and compute."
   ]
  },
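  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the corrected accumulation pattern (the model and data are illustrative):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "model = torch.nn.Linear(4, 1)\n",
    "total_loss = 0.0\n",
    "for _ in range(3):\n",
    "    x = torch.randn(8, 4)\n",
    "    loss = (model(x) ** 2).mean()  # zero-dimensional scalar tensor\n",
    "    # .item() converts to a plain Python float, so total_loss\n",
    "    # accumulates no autograd history\n",
    "    total_loss += loss.item()\n",
    "```"
   ]
  },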
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using torch.Tensor.detach()\n",
    "\n",
    "The official description of detach() reads:\n",
    "\n",
    "Returns a new Tensor, detached from the current graph. The result will never require gradient.\n",
    "\n",
    "Suppose we have models A and B, feed A's output into B, and want to train only model B. We can write:\n",
    "\n",
    "input_B = output_A.detach()\n",
    "\n",
    "This **cuts the gradient flow between the two computation graphs**, which is exactly what we need."
   ]
  },
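  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of that setup (both models are illustrative linear layers):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "model_A = nn.Linear(4, 4)\n",
    "model_B = nn.Linear(4, 2)\n",
    "\n",
    "x = torch.randn(3, 4)\n",
    "output_A = model_A(x)\n",
    "input_B = output_A.detach()  # cut the graph between A and B\n",
    "loss = model_B(input_B).sum()\n",
    "loss.backward()\n",
    "# only B's parameters receive gradients; A's stay untouched\n",
    "```"
   ]
  },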
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setting loss-function parameters in PyTorch\n",
    "\n",
    "Take CrossEntropyLoss as an example:\n",
    "\n",
    "CrossEntropyLoss(self, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='elementwise_mean')\n",
    "\n",
    "If reduce = False, size_average is ignored and a vector of losses is returned, one per element of the batch.\n",
    "If reduce = True, a scalar is returned: loss.mean() when size_average = True, loss.sum() when size_average = False.\n",
    "weight: a 1D weight vector that weights the loss of each class.\n",
    "\n",
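    "In current PyTorch versions, size_average and reduce were merged into the single reduction argument; a minimal sketch of the correspondence:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "logits = torch.randn(4, 3)\n",
    "target = torch.tensor([0, 2, 1, 0])\n",
    "\n",
    "# reduction='none'  ~ reduce=False: one loss per batch element\n",
    "per_sample = nn.CrossEntropyLoss(reduction='none')(logits, target)\n",
    "# reduction='mean'  ~ size_average=True\n",
    "mean_loss = nn.CrossEntropyLoss(reduction='mean')(logits, target)\n",
    "# reduction='sum'   ~ size_average=False\n",
    "sum_loss = nn.CrossEntropyLoss(reduction='sum')(logits, target)\n",
    "```\n",
    "\n",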
    "## A trick that may improve accuracy in metric learning\n",
    "\n",
    "```python\n",
    "if ep < 50:\n",
    "    lr = 1e-4 * (ep // 5 + 1)\n",
    "elif ep < 200:\n",
    "    lr = 1e-3\n",
    "elif ep < 300:\n",
    "    lr = 1e-4\n",
    "```\n",
    "Conclusion: two lines added in February. In short, warm up with a low learning rate for the first 50 epochs, then gradually return to the normal lr.\n",
    "\n",
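    "Wired into a training loop, the schedule above might look as follows (the model and optimizer are illustrative; the original snippet does not specify the lr beyond epoch 300, so the last value is simply kept as an assumption):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "def lr_for_epoch(ep):\n",
    "    # warm up for the first 50 epochs, then the normal lr, then decay\n",
    "    if ep < 50:\n",
    "        return 1e-4 * (ep // 5 + 1)\n",
    "    elif ep < 200:\n",
    "        return 1e-3\n",
    "    else:\n",
    "        return 1e-4  # covers ep < 300; kept afterwards (assumption)\n",
    "\n",
    "model = torch.nn.Linear(2, 2)\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=lr_for_epoch(0))\n",
    "\n",
    "for ep in range(3):\n",
    "    # at the start of each epoch, update the lr in place\n",
    "    for g in optimizer.param_groups:\n",
    "        g['lr'] = lr_for_epoch(ep)\n",
    "```\n",
    "\n",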
    "## Multi-GPU issues\n",
    "\n",
    "### Data not on the same GPU when using nn.DataParallel\n",
    "\n",
    "Background: multi-GPU training in PyTorch mainly uses data parallelism:\n",
    "\n",
    "`model = nn.DataParallel(model)`\n",
    "\n",
    "Problem: while a colleague was training an optical-flow detection experiment, a 'data not in same cuda' error appeared. During code review, printing the tensors at each node showed the data was not distributed onto the same GPU.\n",
    "\n",
    "Fix: call .cuda() uniformly on the data right after it comes out of the dataloader, so that everything ends up on the same CUDA device.\n",
    "\n",
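    "A sketch of that fix: move every tensor in a batch onto one device right after it leaves the dataloader (the helper name is mine; on a GPU machine the device would be 'cuda'):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "def to_device(batch, device):\n",
    "    # uniformly move tensors (possibly nested in lists/dicts) to one device\n",
    "    if torch.is_tensor(batch):\n",
    "        return batch.to(device, non_blocking=True)\n",
    "    if isinstance(batch, (list, tuple)):\n",
    "        return type(batch)(to_device(b, device) for b in batch)\n",
    "    if isinstance(batch, dict):\n",
    "        return {k: to_device(v, device) for k, v in batch.items()}\n",
    "    return batch\n",
    "\n",
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "```\n",
    "\n",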
    "### A pitfall when loading a PyTorch model\n",
    "\n",
    "If you trained with nn.DataParallel on multiple GPUs, remember to go through .module when accessing the model:\n",
    "\n",
    "```python\n",
    "def get_model(self):\n",
    "    if self.nGPU == 1:\n",
    "        return self.model\n",
    "    else:\n",
    "        return self.model.module\n",
    "```"
   ]
  },
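  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Relatedly, a checkpoint saved from a DataParallel-wrapped model has every key prefixed with 'module.'; a small helper (the function name is mine) can normalize the keys before load_state_dict:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "def strip_module_prefix(state_dict):\n",
    "    # remove the 'module.' prefix that nn.DataParallel adds to every key\n",
    "    return {k[len('module.'):] if k.startswith('module.') else k: v\n",
    "            for k, v in state_dict.items()}\n",
    "\n",
    "model = nn.Linear(4, 2)\n",
    "wrapped = nn.DataParallel(model)  # keys become 'module.weight', ...\n",
    "sd = strip_module_prefix(wrapped.state_dict())\n",
    "model.load_state_dict(sd)  # loads into the bare model again\n",
    "```"
   ]
  },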
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Permute\n",
    "1 The official doc:\n",
    "\n",
    "permute(*dims) → Tensor\n",
    "\n",
    "Permute the dimensions of this tensor.\n",
    "\n",
    "Parameters: *dims (int...) – The desired ordering of dimensions\n",
    "\n",
    "Example:\n",
    "```\n",
    ">>> x = torch.randn(2, 3, 5)\n",
    ">>> x.size()\n",
    "torch.Size([2, 3, 5])\n",
    ">>> x.permute(2, 0, 1).size()\n",
    "torch.Size([5, 2, 3])\n",
    "```\n",
    "\n",
    "2 Using permute in PyTorch\n",
    "\n",
    "The permute function itself is simple; a few details are worth noting.\n",
    "\n",
    "2.1 transpose vs. permute\n",
    "\n",
    "Tensor.permute(a, b, c, d, ...) can reorder all dimensions of a tensor of any rank at once, but (in the version discussed here) there is no torch.permute() call form, only Tensor.permute():\n",
    "```\n",
    ">>> torch.randn(2, 3, 4, 5).permute(3, 2, 0, 1).shape\n",
    "torch.Size([5, 4, 2, 3])\n",
    "```\n",
    "\n",
    "torch.transpose(tensor, a, b) swaps exactly two dimensions at a time (the tensor itself may have any rank) and has two call forms, torch.transpose(x, a, b) and x.transpose(a, b).\n",
    "\n",
    "Also, chaining transpose calls can reproduce the effect of permute:\n",
    "```\n",
    ">>> torch.randn(2, 3, 4, 5).transpose(3, 0).transpose(2, 1).transpose(3, 2).shape\n",
    "torch.Size([5, 4, 2, 3])\n",
    ">>> torch.randn(2, 3, 4, 5).transpose(1, 0).transpose(2, 1).transpose(3, 1).shape\n",
    "torch.Size([3, 5, 2, 4])\n",
    "```\n",
    "So permute can act on several dimensions of a tensor at once, while transpose only acts on two dimensions at a time.\n",
    "\n",
    "2.2 permute and contiguous/view\n",
    "\n",
    "contiguous: view only works on a contiguous tensor. If transpose, permute, etc. were called before view, you need to call contiguous() first to get a contiguous copy.\n",
    "\n",
    "One way to think about it: some tensors do not occupy a single contiguous block of memory (their elements are laid out with non-standard strides), while view() requires a contiguous layout; calling contiguous() rewrites the tensor into contiguously laid-out memory.\n",
    "\n",
    "To check whether a tensor is contiguous, call torch.Tensor.is_contiguous():\n",
    "```\n",
    "import torch\n",
    "x = torch.ones(10, 10)\n",
    "x.is_contiguous()                                 # True\n",
    "x.transpose(0, 1).is_contiguous()                 # False\n",
    "x.transpose(0, 1).contiguous().is_contiguous()    # True\n",
    "```\n",
    "Also, PyTorch 0.4 added torch.reshape(), similar to numpy.reshape() and roughly equivalent to tensor.contiguous().view(), which saves you the contiguous() call before view().\n",
    "\n",
    "3 A demo of permute and view\n",
    "```\n",
    "import torch\n",
    "import numpy as np\n",
    "\n",
    "a = np.array([[[1, 2, 3], [4, 5, 6]]])\n",
    "unpermuted = torch.tensor(a)\n",
    "print(unpermuted.size())              #  -->  torch.Size([1, 2, 3])\n",
    "\n",
    "permuted = unpermuted.permute(2, 0, 1)\n",
    "print(permuted.size())                #  -->  torch.Size([3, 1, 2])\n",
    "\n",
    "view_test = unpermuted.view(1, 3, 2)\n",
    "print(view_test.size())               #  -->  torch.Size([1, 3, 2])\n",
    "```\n",
    "permute(2, 0, 1) turns Tensor([[[1,2,3],[4,5,6]]]) into:\n",
    "\n",
    "```\n",
    "tensor([[[ 1,  4]],\n",
    "        [[ 2,  5]],\n",
    "        [[ 3,  6]]])     # print(permuted)\n",
    "```\n",
    "\n",
    "while view(1, 3, 2) gives:\n",
    "\n",
    "```\n",
    "tensor([[[ 1,  2],\n",
    "         [ 3,  4],\n",
    "         [ 5,  6]]])   # print(view_test)\n",
    "```\n",
    "\n",
    "4 References\n",
    "\n",
    "https://zhuanlan.zhihu.com/p/64376950\n",
    "\n",
    "https://pytorch.org/docs/stable/tensors.html?highlight=permute#torch.Tensor.permute\n",
    "\n",
    "https://pytorch-cn.readthedocs.io/zh/latest/package_references/Tensor/#permutedims\n",
    "\n",
    "Published 2019-08-09"
   ]
  },
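  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The equivalences discussed above can be checked directly (a small sketch):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.randn(2, 3, 4, 5)\n",
    "\n",
    "# one permute call vs. a chain of pairwise transposes\n",
    "p = x.permute(3, 2, 0, 1)\n",
    "t = x.transpose(3, 0).transpose(2, 1).transpose(3, 2)\n",
    "same = torch.equal(p, t)\n",
    "\n",
    "# view requires a contiguous tensor; reshape handles that itself\n",
    "flat_a = p.contiguous().view(-1)\n",
    "flat_b = p.reshape(-1)\n",
    "```"
   ]
  },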
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
