{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "881e1c91",
   "metadata": {},
   "source": [
    "# CMA-ES_MGDA算法复现\n",
    "## 环境安装\n",
    "本案例要求 MindSpore >= 2.0.0 版本以调用如下接口: mindspore.jit, mindspore.jit_class, mindspore.data_sink。具体请查看MindSpore安装。\n",
    "\n",
    "此外，你需要安装 MindFlow >=0.1.0 版本。如果当前环境还没有安装，请按照下列方式选择后端和版本进行安装。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1437a2b2",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "mindflow_version = \"0.1.0\"  # update if needed\n",
    "# GPU Comment out the following code if you are using NPU.\n",
    "!pip uninstall -y mindflow-gpu\n",
    "!pip install mindflow-gpu==$mindflow_version\n",
    "\n",
    "# NPU Uncomment if needed.\n",
    "# !pip uninstall -y mindflow-ascend\n",
    "# !pip install mindflow-ascend==$mindflow_version"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aab55f7b",
   "metadata": {},
   "source": [
    "## 概述\n",
    "在PINNS网络的训练过程中，经常出现高度非线性、非凸的优化问题，本算法将免梯度优化算法—— CMA-ES和多目标梯度优化算法（mgda）相结合，克服优化中的高度非凸、梯度异常难题。\n",
    "## 技术路径\n",
    "MindFlow求解该问题的具体流程如下：\n",
    "1. 创建数据集\n",
    "2. 构建神经网络模型、优化器\n",
    "3. 构建cma-es算法\n",
    "4. 模型训练\n",
    "## 引入代码包"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d1a65f89",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "import random\n",
    "import time\n",
    "import os\n",
    "import argparse\n",
    "\n",
    "import cma\n",
    "import numpy as np\n",
    "\n",
    "from mindspore import context, nn, set_seed, save_checkpoint, data_sink\n",
    "from mindspore.amp import DynamicLossScaler, auto_mixed_precision\n",
    "from mindflow.utils import load_yaml_config\n",
    "\n",
    "from src import flatten_grads\n",
    "from src import create_dataset, create_model\n",
    "from src import get_losses, evaluate, visual\n",
    "from src import resize_array, flatten_grads\n",
    "from src import get_train_loss_step\n",
    "\n",
    "set_seed(0)\n",
    "random.seed(0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7490423f",
   "metadata": {},
   "source": [
    "进行参数配置，其中--case有三种选择，\"burgers\"表示对burgers方程进行训练，\"cylinder_flow\"表示对navier_stokes2D方程的圆柱绕流数据集进行训练，\"periodic_hill\"表示对雷诺平均Navier-Stokes方程的山流动数据集进行训练。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5805485",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "parser = argparse.ArgumentParser(description=\"train cma-es_mgda\")\n",
    "parser.add_argument(\"--case\", type=str, default=\"burgers\", choices=[\"burgers\", \"cylinder_flow\", \"periodic_hill\"],\n",
    "                    help=\"choose burgers or cylinder_flow\")\n",
    "parser.add_argument(\"--mode\", type=str, default=\"GRAPH\", choices=[\"GRAPH\", \"PYNATIVE\"],\n",
    "                    help=\"Running in GRAPH_MODE OR PYNATIVE_MODE\")\n",
    "parser.add_argument(\"--device_target\", type=str, default=\"Ascend\", choices=[\"GPU\", \"Ascend\"],\n",
    "                    help=\"The target device to run, support 'Ascend', 'GPU'\")\n",
    "parser.add_argument(\"--device_id\", type=int, default=0,\n",
    "                    help=\"ID of the target device\")\n",
    "parser.add_argument(\"--config_file_path\", type=str,\n",
    "                    default=\"./cma_es_mgda.yaml\")\n",
    "args = parser.parse_args()\n",
    "\n",
    "context.set_context(mode=context.GRAPH_MODE if args.mode.upper().startswith(\"GRAPH\")\n",
    "                    else context.PYNATIVE_MODE,\n",
    "                    save_graphs=args.save_graphs,\n",
    "                    save_graphs_path=args.save_graphs_path,\n",
    "                    device_target=args.device_target,\n",
    "                    device_id=args.device_id)\n",
    "print(\n",
    "    f\"Running in {args.mode.upper()} mode, using device id: {args.device_id}.\")\n",
    "use_ascend = context.get_context(attr_key='device_target') == \"Ascend\"\n",
    "start_time = time.time()\n",
    "print(use_ascend)\n",
    "print(\"pid:\", os.getpid())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b43412d",
   "metadata": {},
   "source": [
    "确定要训练的方程的、进行yaml文件的加载"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6fefe7a4",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# get case name\n",
    "case_name = args.case\n",
    "\n",
    "# load configurations\n",
    "config = load_yaml_config(args.config_file_path)\n",
    "\n",
    "model_config = config[\"model\"]\n",
    "optimizer_config = config[\"optimizer\"]\n",
    "cma_es_config = config[\"cmaes\"]\n",
    "mgda_config = config[\"mgda\"]\n",
    "summary_config = config[\"summary\"]"
   ]
  },
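  {
   "cell_type": "markdown",
   "id": "3c1f9a20",
   "metadata": {},
   "source": [
    "The keys read above imply a configuration layout like the following. This is an illustrative sketch of cma_es_mgda.yaml (the values are placeholders, not the repository defaults):\n",
    "\n",
    "```yaml\n",
    "model:\n",
    "  amp_level: O3          # mixed-precision level used on Ascend\n",
    "optimizer:\n",
    "  initial_lr: 0.001\n",
    "  train_epochs: 100\n",
    "cmaes:\n",
    "  std: 0.5               # initial step size (sigma) of CMA-ES\n",
    "mgda:\n",
    "  ratio: 0.5             # fraction of the population refined by MGDA\n",
    "summary:\n",
    "  save_ckpt_path: ./summary\n",
    "  eval_interval_epochs: 10\n",
    "  save_checkpoint_epochs: 50\n",
    "```"
   ]
  },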
  {
   "cell_type": "markdown",
   "id": "772f3ccc",
   "metadata": {},
   "source": [
    "## 创建数据集\n",
    "下面提供三个案例的数据集讲解，在使用本代码过程中，想要训练哪个数据集就下载哪个数据集并将其放置于src文件夹下。\n",
    "\n",
    "其中train_dataset用于多目标梯度过程，loss_dataset用于cma-es训练过程中得出解的适应值，inputs、label为测试集。\n",
    "### burgers数据集\n",
    "burgers数据集根据求解域、初始条件及边值条件进行随机采样，生成训练数据集与测试数据集。\n",
    "### Periodic_hill数据集\n",
    "数据格式为numpy的npy，维度为：[300，700， 10]。其中前两个维度分别为流场的长和宽，最后维度为包含（x, y, u, v, p, uu, uv, vv, rho, nu）共计10个变量。其中，x, y, u, v, p分别为流场的x坐标、y坐标、x方向速度、y方向速度、压力；uu, uv, vv雷诺平均统计量；rho为流体密度，nu为运动粘性系数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6cee2c49",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# create dataset for training, calculating loss and testing\n",
    "train_dataset, loss_dataset, inputs, label = create_dataset(\n",
    "    case_name, config)"
   ]
  },
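  {
   "cell_type": "markdown",
   "id": "3c1f9a21",
   "metadata": {},
   "source": [
    "As a sketch of how the periodic_hill array described above splits into its 10 variables (a NumPy-only illustration; the zero array below stands in for the downloaded npy file):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c1f9a22",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Stand-in for np.load('src/periodic_hill.npy'); the real file has shape [300, 700, 10]\n",
    "data = np.zeros((300, 700, 10), dtype=np.float32)\n",
    "\n",
    "# Split the last axis into the 10 named flow variables\n",
    "x, y, u, v, p, uu, uv, vv, rho, nu = np.split(data, 10, axis=-1)\n",
    "print(x.shape)  # (300, 700, 1)"
   ]
  },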
  {
   "cell_type": "markdown",
   "id": "fae27d40",
   "metadata": {},
   "source": [
    "## 构建模型\n",
    "本例使用简单的全连接网络，深度为5层，激发函数为tanh函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f0cb2d7",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# define models\n",
    "model = create_model(case_name, config)"
   ]
  },
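  {
   "cell_type": "markdown",
   "id": "3c1f9a23",
   "metadata": {},
   "source": [
    "create_model builds the MindSpore network from the yaml configuration. As a framework-agnostic illustration of the architecture described above, a 5-layer fully connected network with tanh activations can be sketched in NumPy (the layer widths here are illustrative assumptions, not the configured ones):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c1f9a24",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "def mlp_forward(x, widths=(2, 128, 128, 128, 128, 1)):\n",
    "    # 5 weight matrices give depth 5; tanh on every layer except the output\n",
    "    h = x\n",
    "    for i in range(len(widths) - 1):\n",
    "        w = rng.standard_normal((widths[i], widths[i + 1])) * 0.1\n",
    "        h = h @ w\n",
    "        if i < len(widths) - 2:\n",
    "            h = np.tanh(h)\n",
    "    return h\n",
    "\n",
    "out = mlp_forward(np.ones((4, 2)))\n",
    "print(out.shape)  # (4, 1)"
   ]
  },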
  {
   "cell_type": "markdown",
   "id": "daad1584",
   "metadata": {},
   "source": [
    "## 模型训练\n",
    "使用MindSpore >= 2.0.0的版本，可以使用函数式编程范式训练神经网络。\n",
    "\n",
    "此处采用进化算法CMA-ES从高斯分布中筛选模型参数，并且对筛选出的参数挑选后代进行多目标梯度下降。"
   ]
  },
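  {
   "cell_type": "markdown",
   "id": "3c1f9a25",
   "metadata": {},
   "source": [
    "For the special case of two loss terms, the multi-objective gradient step has a closed form: choose the convex combination alpha * g1 + (1 - alpha) * g2 of the two task gradients with minimal norm (the min-norm formulation of MGDA). A NumPy sketch of this two-task special case (a simplified illustration, not the src implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c1f9a26",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def mgda_two_task_weight(g1, g2):\n",
    "    # Minimize ||alpha*g1 + (1 - alpha)*g2||^2 over alpha in [0, 1]\n",
    "    diff = g1 - g2\n",
    "    denom = float(np.dot(diff, diff))\n",
    "    if denom == 0.0:  # identical gradients: any alpha gives the same step\n",
    "        return 0.5\n",
    "    alpha = float(np.dot(g2 - g1, g2)) / denom\n",
    "    return float(np.clip(alpha, 0.0, 1.0))\n",
    "\n",
    "g1 = np.array([1.0, 0.0])\n",
    "g2 = np.array([0.0, 1.0])\n",
    "alpha = mgda_two_task_weight(g1, g2)\n",
    "common_descent = alpha * g1 + (1 - alpha) * g2\n",
    "print(alpha, common_descent)  # 0.5 [0.5 0.5]"
   ]
  },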
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "02f737a8",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "'''Train and evaluate the network'''\n",
    "# get train epochs\n",
    "epochs = optimizer_config[\"train_epochs\"]\n",
    "\n",
    "# define optimizer\n",
    "optimizer = nn.Adam(model.trainable_params(),\n",
    "                    optimizer_config[\"initial_lr\"])\n",
    "\n",
    "# set ascend\n",
    "if use_ascend:\n",
    "    loss_scaler = DynamicLossScaler(1024, 2, 100)\n",
    "    auto_mixed_precision(model, model_config[\"amp_level\"])\n",
    "else:\n",
    "    loss_scaler = None\n",
    "\n",
    "# define train_step and loss_step\n",
    "train_step, loss_step = get_train_loss_step(use_ascend=use_ascend, case_name=case_name,\n",
    "                                            config=config, model=model, optimizer=optimizer, loss_scaler=loss_scaler)\n",
    "grad_sink_process = data_sink(train_step, train_dataset, sink_size=1)\n",
    "loss_sink_process = data_sink(loss_step, loss_dataset, sink_size=1)\n",
    "\n",
    "\n",
    "# define cma-es\n",
    "# prepare initial params for cma-es\n",
    "params, shapes = flatten_grads(optimizer.parameters)\n",
    "\n",
    "# define cma-es strategy\n",
    "cma_es = cma.CMAEvolutionStrategy(\n",
    "    params, cma_es_config[\"std\"], {'seed': 0})\n",
    "\n",
    "# create ckpt dir\n",
    "if not os.path.exists(os.path.abspath(summary_config[\"save_ckpt_path\"])):\n",
    "    os.makedirs(os.path.abspath(summary_config[\"save_ckpt_path\"]))\n",
    "\n",
    "# train loop for cma-es and mgda\n",
    "for epoch in range(1, epochs + 1):\n",
    "    # set begin time\n",
    "    time_beg = time.time()\n",
    "\n",
    "    # set model to train mode\n",
    "    model.set_train(True)\n",
    "\n",
    "    # step1: ask solutions\n",
    "    solutions = cma_es.ask()\n",
    "\n",
    "    # step2: random choose some solutions and apply multi-gradient descent algorithm to chosen solutions\n",
    "    steps_per_epochs = train_dataset.get_dataset_size()\n",
    "    mgda_number = round(mgda_config[\"ratio\"] * cma_es.popsize)\n",
    "    popn_list = np.arange(0, mgda_number, 1).tolist()\n",
    "    random_index = random.sample(popn_list, mgda_number)\n",
    "    for index in random_index:\n",
    "        tmp_params = solutions[index]\n",
    "        optimizer.parameters = resize_array(tmp_params, shapes)\n",
    "        # apply multi-gradient descent algorithm\n",
    "        for _ in range(steps_per_epochs):\n",
    "            grad_sink_process()\n",
    "        solutions[index] = flatten_grads(optimizer.parameters)[0]\n",
    "\n",
    "    # get corresponding losses\n",
    "    losses = get_losses(loss_sink_process, solutions, model)\n",
    "\n",
    "    # step3: tell cma-es the losses of solutions\n",
    "    cma_es.tell(solutions, losses)\n",
    "\n",
    "    # current best params\n",
    "    best_params = cma_es.best.x\n",
    "\n",
    "    # current best loss\n",
    "    step_train_loss = cma_es.best.f\n",
    "\n",
    "    # current epoch loss\n",
    "    print(\n",
    "        f\"epoch: {epoch} train loss: {step_train_loss} epoch time: {(time.time() - time_beg) * 1000 :.3f}ms\")\n",
    "\n",
    "    # set model to eval mode\n",
    "    model.set_train(False)\n",
    "\n",
    "    # evaluate best_params if current epoch reaches setting\n",
    "    if epoch % summary_config[\"eval_interval_epochs\"] == 0:\n",
    "        model.trainable_parameters = best_params\n",
    "        evaluate(model, inputs, label, config)\n",
    "\n",
    "    # save checkpoint\n",
    "    if epoch % summary_config[\"save_checkpoint_epochs\"] == 0:\n",
    "        ckpt_name = \"ns-{}.ckpt\".format(epoch + 1)\n",
    "        model.trainable_parameters = best_params\n",
    "        save_checkpoint(model, os.path.join(\n",
    "            summary_config[\"save_ckpt_path\"], ckpt_name))\n",
    "print(\"End-to-End total time: {} s\".format(time.time() - start_time))"
   ]
  },
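  {
   "cell_type": "markdown",
   "id": "3c1f9a27",
   "metadata": {},
   "source": [
    "CMA-ES works on a single flat parameter vector, while the optimizer holds shaped tensors; flatten_grads and resize_array from src convert between the two views, as in the loop above. A hypothetical NumPy re-implementation of that round-trip, for illustration only:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c1f9a28",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def flatten_params(arrays):\n",
    "    # Concatenate all arrays into one flat vector, remembering their shapes\n",
    "    shapes = [a.shape for a in arrays]\n",
    "    flat = np.concatenate([a.ravel() for a in arrays])\n",
    "    return flat, shapes\n",
    "\n",
    "def restore_params(flat, shapes):\n",
    "    # Split the flat vector back into arrays of the recorded shapes\n",
    "    arrays, offset = [], 0\n",
    "    for shape in shapes:\n",
    "        size = int(np.prod(shape))\n",
    "        arrays.append(flat[offset:offset + size].reshape(shape))\n",
    "        offset += size\n",
    "    return arrays\n",
    "\n",
    "params = [np.ones((2, 3)), np.zeros(4)]\n",
    "flat, shapes = flatten_params(params)\n",
    "restored = restore_params(flat, shapes)\n",
    "print(flat.shape)  # (10,)"
   ]
  },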
  {
   "cell_type": "markdown",
   "id": "66e5df51",
   "metadata": {},
   "source": [
    "## 模型推理及可视化\n",
    "训练后可对流场内所有数据点进行推理，并可视化相关结果。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "42b3664e",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "model.trainable_parameters = best_params\n",
    "visual(case_name, model, config, inputs, label)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
