{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "048ad798",
   "metadata": {
    "origin_pos": 0
   },
   "source": [
    "# Modern Convolutional Neural Networks\n",
    ":label:`chap_modern_cnn`\n",
    "\n",
    "The previous chapter introduced the basic principles of convolutional neural networks. This chapter surveys modern CNN architectures; much of later CNN research builds on the ideas presented here.\n",
    "Each model in this chapter was dominant in its day, and many were winners of the ImageNet competition, which since 2010 has served as a barometer of progress in supervised learning for computer vision.\n",
    "\n",
    "These models include:\n",
    "\n",
    "- AlexNet, the first large-scale neural network to beat conventional computer vision models in a large-scale vision competition;\n",
    "- the network using repeating blocks (VGG), which builds a deep network out of many repeated blocks of layers;\n",
    "- the network in network (NiN), which repeatedly stacks convolutional layers with $1\\times 1$ convolutions (used in place of fully connected layers) to build deep networks;\n",
    "- the network with parallel concatenations (GoogLeNet), which extracts information in parallel through convolutional and max-pooling branches with different window sizes;\n",
    "- the residual network (ResNet), which uses residual blocks to build cross-layer data paths and remains among the most popular architectures in computer vision;\n",
    "- the densely connected network (DenseNet), which is computationally expensive but delivers strong results.\n",
    "\n",
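    "The residual idea in the list above can be sketched independently of any framework: a residual connection simply adds a block's input to its output (the names `residual` and `block` below are illustrative, not from any library):\n",
    "\n",
    "```python\n",
    "def residual(block, x):\n",
    "    # Residual connection: add the block's input to its output,\n",
    "    # so information can also flow through the identity path.\n",
    "    return x + block(x)\n",
    "\n",
    "double = lambda x: 2 * x\n",
    "print(residual(double, 3))  # 3 + 2 * 3 = 9\n",
    "```\n",
    "\n",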
    "While the idea behind deep neural networks is simple (stack layers together), performance varies wildly across architectures and hyperparameter choices.\n",
    "The networks described in this chapter are the product of human intuition and mathematical insight, refined through a great deal of trial and error.\n",
    "We present these models in chronological order, partly to convey the history and partly to build intuition about how the field developed, which in turn helps when designing your own architectures.\n",
    "For instance, batch normalization and residual networks (ResNet), both introduced in this chapter, have provided key guiding ideas for designing and training deep neural networks.\n",
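    "\n",
    "To preview one of these ideas: at its core, batch normalization standardizes each feature over a mini-batch. A minimal pure-Python sketch for a batch of scalars (the function name is illustrative; the real layer also learns a scale and a shift):\n",
    "\n",
    "```python\n",
    "def batch_norm(xs, eps=1e-5):\n",
    "    # Standardize a batch of scalar activations to zero mean, unit variance.\n",
    "    mean = sum(xs) / len(xs)\n",
    "    var = sum((x - mean) ** 2 for x in xs) / len(xs)\n",
    "    return [(x - mean) / (var + eps) ** 0.5 for x in xs]\n",
    "\n",
    "print(batch_norm([1.0, 2.0, 3.0]))  # roughly [-1.22, 0.0, 1.22]\n",
    "```\n",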
    "\n",
    ":begin_tab:toc\n",
    " - [alexnet](alexnet.ipynb)\n",
    " - [vgg](vgg.ipynb)\n",
    " - [nin](nin.ipynb)\n",
    " - [googlenet](googlenet.ipynb)\n",
    " - [batch-norm](batch-norm.ipynb)\n",
    " - [resnet](resnet.ipynb)\n",
    " - [densenet](densenet.ipynb)\n",
    ":end_tab:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "09d114fc-4c9a-4446-a92b-6a6d2491aeb7",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/python3.8/lib/python3.8/site-packages/torch_npu/utils/path_manager.py:79: UserWarning: Warning: The /usr/local/Ascend/ascend-toolkit/latest owner does not match the current user.\n",
      "  warnings.warn(f\"Warning: The {path} owner does not match the current user.\")\n",
      "/usr/local/python3.8/lib/python3.8/site-packages/torch_npu/utils/path_manager.py:79: UserWarning: Warning: The /usr/local/Ascend/ascend-toolkit/8.0.RC1/aarch64-linux/ascend_toolkit_install.info owner does not match the current user.\n",
      "  warnings.warn(f\"Warning: The {path} owner does not match the current user.\")\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Total NPU memory: 62432.00 MB\n",
      "Allocated NPU memory: 0.00 MB\n",
      "Cached NPU memory: 0.00 MB\n",
      "Free NPU memory: 62432.00 MB\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch_npu\n",
    "\n",
    "# Check whether an NPU device is available\n",
    "if torch_npu.npu.is_available():\n",
    "    # Get the current default NPU device\n",
    "    device = torch_npu.npu.current_device()\n",
    "\n",
    "    # Query the NPU's dedicated memory statistics\n",
    "    total_memory = torch_npu.npu.get_device_properties(device).total_memory\n",
    "    allocated_memory = torch_npu.npu.memory_allocated(device)\n",
    "    cached_memory = torch_npu.npu.memory_reserved(device)\n",
    "    # Free memory here means total minus allocated (cached memory is reusable)\n",
    "    free_memory = total_memory - allocated_memory\n",
    "\n",
    "    print(f\"Total NPU memory: {total_memory / 1024 ** 2:.2f} MB\")\n",
    "    print(f\"Allocated NPU memory: {allocated_memory / 1024 ** 2:.2f} MB\")\n",
    "    print(f\"Cached NPU memory: {cached_memory / 1024 ** 2:.2f} MB\")\n",
    "    print(f\"Free NPU memory: {free_memory / 1024 ** 2:.2f} MB\")\n",
    "else:\n",
    "    print(\"No NPU devices available.\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.19"
  },
  "required_libs": []
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
