{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Converting a PyTorch model to ONNX"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torchvision.models as models\n",
    "\n",
    "# Load a pretrained model (the weights API replaces the deprecated pretrained=True)\n",
    "model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)\n",
    "model.eval()\n",
    "\n",
    "# Dummy input matching the model's expected shape [1, 3, 224, 224]\n",
    "dummy_input = torch.randn(1, 3, 224, 224)\n",
    "\n",
    "# Export to ONNX format\n",
    "torch.onnx.export(model, dummy_input, \"resnet18.onnx\",\n",
    "                  export_params=True,\n",
    "                  opset_version=13,  # opset 13 covers all ResNet ops; older opsets (e.g. 10) also work\n",
    "                  do_constant_folding=True,\n",
    "                  input_names=['input'],\n",
    "                  output_names=['output'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- Model: `models.resnet18(...)` can be replaced with your own trained model. For example, if you use a different architecture (such as a custom model), substitute it here.\n",
    "- Input tensor: `torch.randn(1, 3, 224, 224)` must be adjusted to your model's input size. For example, if your model expects a 1 x 1 x 28 x 28 input, change it to `torch.randn(1, 1, 28, 28)`."
   ]
  },
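  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the deployed service should accept variable batch sizes, the export call can declare dynamic axes. The sketch below is a hedged variant of the export above; it uses a tiny stand-in network instead of ResNet so the cell is self-contained, and the file and axis names (`model_dynamic.onnx`, `batch`) are illustrative.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Tiny stand-in model; replace with your own network\n",
    "model = torch.nn.Sequential(\n",
    "    torch.nn.Conv2d(3, 8, kernel_size=3),\n",
    "    torch.nn.AdaptiveAvgPool2d(1),\n",
    "    torch.nn.Flatten(),\n",
    "    torch.nn.Linear(8, 10))\n",
    "model.eval()\n",
    "dummy_input = torch.randn(1, 3, 224, 224)\n",
    "\n",
    "# dynamic_axes marks dimension 0 as variable, so the exported model\n",
    "# accepts inputs of shape [N, 3, 224, 224] for any batch size N\n",
    "torch.onnx.export(model, dummy_input, \"model_dynamic.onnx\",\n",
    "                  export_params=True,\n",
    "                  opset_version=13,\n",
    "                  do_constant_folding=True,\n",
    "                  input_names=['input'],\n",
    "                  output_names=['output'],\n",
    "                  dynamic_axes={'input': {0: 'batch'},\n",
    "                                'output': {0: 'batch'}})"
   ]
  },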
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Model optimization: running inference with ONNX Runtime\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import onnx\n",
    "import onnxruntime as ort\n",
    "import numpy as np\n",
    "\n",
    "# Load the ONNX model and check that it is well formed\n",
    "onnx_model = onnx.load(\"resnet18.onnx\")\n",
    "onnx.checker.check_model(onnx_model)\n",
    "\n",
    "# Create an inference session\n",
    "ort_session = ort.InferenceSession(\"resnet18.onnx\")\n",
    "\n",
    "# Prepare input data\n",
    "input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)\n",
    "\n",
    "# Run inference\n",
    "outputs = ort_session.run(None, {\"input\": input_data})\n",
    "print(outputs[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- File name: `onnx.load(\"resnet18.onnx\")` must be changed to the file name of your saved ONNX model.\n",
    "- Input data: the shape of `input_data` (e.g. `np.random.randn(1, 3, 224, 224)`) must match the model's input shape. If your model expects a different shape, adjust it accordingly."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Deploying the ONNX model with Flask"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from flask import Flask, request, jsonify\n",
    "import onnxruntime as ort\n",
    "import numpy as np\n",
    "\n",
    "app = Flask(__name__)\n",
    "\n",
    "# Create the ONNX Runtime session once, at startup\n",
    "ort_session = ort.InferenceSession(\"resnet18.onnx\")\n",
    "\n",
    "@app.route('/predict', methods=['POST'])\n",
    "def predict():\n",
    "    # Read the input data from the JSON request body\n",
    "    data = request.json['input']\n",
    "    input_data = np.array(data).astype(np.float32).reshape(1, 3, 224, 224)\n",
    "\n",
    "    # Run inference\n",
    "    outputs = ort_session.run(None, {\"input\": input_data})\n",
    "    return jsonify(outputs[0].tolist())\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    app.run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- File name: `ort.InferenceSession(\"resnet18.onnx\")` must be changed to your ONNX model's file name.\n",
    "- Input handling: `data = request.json['input']` and the `input_data` processing must match the format your frontend actually sends, and the array's shape and dtype must meet the model's requirements."
   ]
  },
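  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A client calls the `/predict` endpoint by sending the tensor as nested JSON lists. The sketch below builds such a payload with NumPy; the URL and the `input` field name are assumptions matching the server above, and the actual `requests.post` call is left commented out so the cell runs without a live server.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import numpy as np\n",
    "\n",
    "# Build a JSON payload whose nested-list shape matches [1, 3, 224, 224]\n",
    "image = np.random.randn(1, 3, 224, 224).astype(np.float32)\n",
    "payload = json.dumps({\"input\": image.tolist()})\n",
    "\n",
    "# Hypothetical client call, assuming the server runs locally on port 5000:\n",
    "# import requests\n",
    "# r = requests.post(\"http://127.0.0.1:5000/predict\", data=payload,\n",
    "#                   headers={\"Content-Type\": \"application/json\"})\n",
    "# print(r.json())\n",
    "\n",
    "# Sanity-check the round-tripped shape\n",
    "print(np.array(json.loads(payload)[\"input\"]).shape)"
   ]
  },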
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Flask application"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from flask import Flask, request, jsonify\n",
    "import numpy as np\n",
    "import onnxruntime as ort\n",
    "\n",
    "app = Flask(__name__)\n",
    "\n",
    "# Load the ONNX model\n",
    "model_path = \"your_model.onnx\"  # replace with your model's file name\n",
    "ort_session = ort.InferenceSession(model_path)\n",
    "\n",
    "@app.route('/predict', methods=['POST'])\n",
    "def predict():\n",
    "    data = request.json['input']  # read the input data from the request\n",
    "    # Assumes the input is a nested list with shape [1, 3, 224, 224]\n",
    "    input_data = np.array(data).astype(np.float32).reshape(1, 3, 224, 224)\n",
    "\n",
    "    # Run inference\n",
    "    outputs = ort_session.run(None, {\"input\": input_data})\n",
    "    return jsonify(outputs[0].tolist())\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    app.run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- `model_path = \"your_model.onnx\"`: replace with the actual file name and path of your ONNX model.\n",
    "- The shape of `input_data`: `reshape(1, 3, 224, 224)` must be adjusted to your model's actual input. If the model expects a different shape (such as 1 x 1 x 28 x 28), change it accordingly."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. Model compression (example: pruning)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn.utils.prune as prune\n",
    "\n",
    "# Assumes `model` is your PyTorch model, defined elsewhere\n",
    "# This example prunes a convolutional layer\n",
    "prune.random_unstructured(model.conv1, name=\"weight\", amount=0.3)  # prune 30% of the weights"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- `model`: make sure your own PyTorch model is defined in the code and the layer you want to prune is accessible.\n",
    "- `model.conv1`: change to the specific layer to prune in your own model. If your model has no `conv1` layer, point this at the appropriate convolutional (or other) layer.\n",
    "- `amount=0.3`: adjust to the fraction of weights you want to prune (here 30%)."
   ]
  },
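  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pruning only attaches a mask (`weight_mask`) and keeps the original tensor as `weight_orig`; calling `prune.remove` folds the mask into the weight permanently. A minimal self-contained sketch on a small stand-in layer (not your actual model):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn.utils.prune as prune\n",
    "\n",
    "# Stand-in layer; in practice this is a layer of your model\n",
    "conv = torch.nn.Conv2d(3, 8, kernel_size=3)\n",
    "\n",
    "# Prune 30% of the weights (attaches weight_mask / weight_orig)\n",
    "prune.random_unstructured(conv, name=\"weight\", amount=0.3)\n",
    "\n",
    "# Make the pruning permanent: fold the mask into the weight tensor\n",
    "prune.remove(conv, \"weight\")\n",
    "\n",
    "# Roughly 30% of the entries are now exactly zero\n",
    "sparsity = float((conv.weight == 0).float().mean())\n",
    "print(f\"sparsity: {sparsity:.2f}\")"
   ]
  },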
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. Model quantization (example: dynamic quantization)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.quantization\n",
    "\n",
    "# Assumes `model` is your PyTorch model, defined elsewhere\n",
    "model.eval()  # switch to evaluation mode\n",
    "\n",
    "# Apply dynamic quantization, targeting only the fully connected (Linear) layers\n",
    "quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- `model`: make sure the PyTorch model you want to quantize is defined.\n",
    "- `{torch.nn.Linear}`: this set specifies which layer types to quantize. If other layers in your model should be quantized (such as convolutional layers), add the corresponding types here.\n",
    "- `dtype=torch.qint8`: change if you need a different quantization type, e.g. `torch.float16`."
   ]
  },
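  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dynamic quantization stores the Linear weights as int8, so the serialized model shrinks noticeably (roughly 4x for the quantized layers). A self-contained sketch that measures this on a small stand-in model; the layer sizes are illustrative:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "import torch\n",
    "\n",
    "# Small stand-in model built from Linear layers, the type targeted above\n",
    "model = torch.nn.Sequential(\n",
    "    torch.nn.Linear(256, 256),\n",
    "    torch.nn.ReLU(),\n",
    "    torch.nn.Linear(256, 10))\n",
    "model.eval()\n",
    "\n",
    "quantized = torch.quantization.quantize_dynamic(\n",
    "    model, {torch.nn.Linear}, dtype=torch.qint8)\n",
    "\n",
    "def serialized_size(m):\n",
    "    # Serialize the state dict to an in-memory buffer and count the bytes\n",
    "    buf = io.BytesIO()\n",
    "    torch.save(m.state_dict(), buf)\n",
    "    return buf.getbuffer().nbytes\n",
    "\n",
    "print(serialized_size(model), serialized_size(quantized))"
   ]
  },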
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4. Post-processing model outputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "def post_process(output):\n",
    "    # Assuming the output is logits, apply softmax to get a probability distribution\n",
    "    probabilities = torch.nn.functional.softmax(torch.tensor(output), dim=1)\n",
    "    predicted_classes = torch.argmax(probabilities, dim=1)  # pick the predicted class\n",
    "    return predicted_classes.tolist()  # return the class indices as a list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What you need to change:\n",
    "\n",
    "- `output`: make sure this parameter receives the model's output logits, e.g. the output of the ONNX model.\n",
    "- `dim=1`: if your model's output dimensions differ from the expected layout, adjust this parameter.\n",
    "- The handling of `predicted_classes` can be adapted to your needs, e.g. returning more detailed information or a different format."
   ]
  },
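  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since ONNX Runtime returns NumPy arrays, the same post-processing can be done without PyTorch. A NumPy-only sketch (numerically stable softmax, then argmax); the function name is illustrative:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def post_process_np(logits):\n",
    "    # Numerically stable softmax over the class dimension\n",
    "    shifted = logits - logits.max(axis=1, keepdims=True)\n",
    "    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)\n",
    "    return probs.argmax(axis=1).tolist()\n",
    "\n",
    "# A batch of one with three classes; class 1 has the largest logit\n",
    "print(post_process_np(np.array([[0.1, 2.0, -1.0]])))"
   ]
  },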
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Available execution providers: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\n"
     ]
    }
   ],
   "source": [
    "import onnxruntime as ort\n",
    "\n",
    "# List the available execution providers\n",
    "available_providers = ort.get_available_providers()\n",
    "print(\"Available execution providers:\", available_providers)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorchgpu",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
