{
    "cells": [
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "# DAE-PINN\n",
                "\n",
                "## Environment Setup\n",
                "\n",
                "This case requires **MindSpore >= 2.5.0** for the following interfaces: *mindspore.jit, mindspore.jit_class, mindspore.data_sink*. See the [MindSpore installation page](https://www.mindspore.cn/install) for details.\n",
                "\n",
                "In addition, **MindScience >= 0.1.0** is required. If it is not yet installed in your environment, install it for your backend as follows.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "mindscience_version = \"0.1.0\"  # update if needed\n",
                "# Install the Ascend (NPU) backend of MindScience; skip if already installed.\n",
                "!pip uninstall -y mindscience-ascend\n",
                "!pip install mindscience-ascend==$mindscience_version"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Overview\n",
                "\n",
                "* **The need for dynamic security assessment of power networks**: With the integration of distributed energy resources into power networks, market liberalization, and the adoption of complex communication and control algorithms, operating conditions and potential fault scenarios have become more diverse, affecting network security. Assessing the dynamic security of a power network requires simulating its dynamic response to a single fault, which means solving a set of nonlinear differential-algebraic equations (DAEs). Conventional explicit integration schemes fail on DAEs, and commercial solvers are computationally expensive and memory-intensive, limiting the online deployment of dynamic security assessment.\n",
                "* **Potential and challenges of deep learning in science and engineering**: Although deep learning has achieved great success in fields such as computer vision and natural language processing, its application to learning dynamical systems in science and engineering remains limited, because data collection is expensive and most conventional deep-learning methods lack robustness and generalization when data are scarce.\n",
                "\n",
                "## How It Works\n",
                "\n",
                "* The DAE-PINN framework combines an implicit Runge-Kutta (IRK) time-stepping scheme, designed specifically for solving DAEs, with physics-informed neural networks (PINNs). During time stepping, assuming the integration has reached $(t_n, y_n, z_n)$, the goal is to advance to $(t_{n+1}, y_{n+1}, z_{n+1})$. Applying the IRK scheme yields a system of equations, including the update formulas for the internal stages and for the final state.\n",
                "* A penalty method forces the neural network to satisfy the DAE as an approximate hard constraint. During training, the DAE residual is included in the loss function, so that the network not only fits the data but also satisfies the DAE describing the physical laws, embedding physical information into the learning process.\n",
                "\n",
                "## Method Details\n",
                "\n",
                "* **Problem setup**: The DAE is given in semi-explicit form, with dynamic states $y$, algebraic variables $z$, a function $f$ describing the differential equations, and a function $g$ describing the algebraic equations. Assuming $f$ and $g$ are sufficiently differentiable and the DAE has index 1, i.e. the Jacobian $g_z$ has a bounded inverse near the exact solution, the algebraic equations have a locally unique solution $z = G(y)$, so the DAE reduces to a system of ordinary differential equations.\n",
                "* **Network architecture**: Like a standard PINN, DAE-PINN typically consists of an input layer, several hidden layers, and an output layer. The input layer receives time and the dynamic states, the hidden layers extract and transform features through nonlinear activation functions, and the output layer predicts the values of the algebraic variables.\n",
                "* **Loss function**: The loss has two parts: a data loss that fits known points such as initial and boundary conditions, and a physics loss, i.e. the DAE residual loss. Automatic differentiation computes the derivatives of the network outputs with respect to time and the state variables; substituting these into the DAE yields the residual, which enters the loss as the physics term. Optimizing both parts makes the network fit the data while satisfying the physical equations.\n",
                "\n",
                "<p align = \"center\">\n",
                "<img src=\"images/model.png\" height=\"300\" />\n",
                "</p>\n",
                "\n",
                "The network architecture of [DAE-PINN](https://arxiv.org/abs/2109.04304) is shown above.\n",
                "\n",
                "Unlike conventional neural networks, DAE-PINN incorporates physical information into the network through a tailored loss function, so that the network both fits the data and satisfies the DAE describing the physical laws, improving accuracy and generalization. As shown above, the architecture consists of two networks that handle the dynamic states and the algebraic states separately. The inputs include time and the dynamic states; the outputs are the predictions of the dynamic states $y$ and the algebraic variables $z$. In the power-network case, these predictions simulate the dynamic behavior of the network. Three backbones are supported: `fnn`, `attention`, and `conv1d`. `fnn` is a multilayer perceptron, `attention` is an FFN in the style of transformer attention, and `conv1d` is an FFN built with `Conv1D` layers.\n",
                "\n",
                "## Preparation\n",
                "\n",
                "Before practicing, make sure recent versions of MindSpore and MindScience are correctly installed. If not, you can:\n",
                "\n",
                "* Install MindSpore from the [MindSpore installation page](https://www.mindspore.cn/install).\n",
                "* Install MindScience from the [MindScience repository](https://gitee.com/mindspore/mindscience).\n"
            ]
        },
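        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "As a minimal, self-contained sketch of the IRK time stepping described above (a toy stand-in, not the MindSpore implementation; the linear test ODE $y' = \\lambda y$ and the 1-stage implicit-midpoint tableau are hypothetical choices for illustration), the stage and update equations for an ODE read $k_i = f\\big(y_n + h\\sum_j a_{ij} k_j\\big)$ and $y_{n+1} = y_n + h\\sum_i b_i k_i$. In DAE-PINN the stage values are network outputs and the stage equations become residual terms in the physics loss:\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "# Toy 1-stage IRK step (implicit midpoint: a11 = 1/2, b1 = 1)\n",
                "# for the linear test ODE y' = lam * y. Hypothetical illustration only.\n",
                "lam = -2.0   # ODE coefficient\n",
                "h = 0.1      # step size\n",
                "y_n = 1.0    # state at t_n\n",
                "\n",
                "def f(y):\n",
                "    return lam * y\n",
                "\n",
                "# Stage equation k1 = f(y_n + h * a11 * k1), solved by fixed-point iteration\n",
                "k1 = f(y_n)\n",
                "for _ in range(50):\n",
                "    k1 = f(y_n + 0.5 * h * k1)\n",
                "\n",
                "y_next = y_n + h * k1  # update: y_{n+1} = y_n + h * b1 * k1\n",
                "\n",
                "# Stage-equation residual: DAE-PINN squares residuals like this one\n",
                "# and adds them to the training loss as a soft (penalty) constraint.\n",
                "residual = k1 - f(y_n + 0.5 * h * k1)\n",
                "print(y_next, abs(residual))"
            ]
        },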
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## DAE-PINN Implementation\n",
                "\n",
                "The DAE-PINN implementation consists of the following five steps:\n",
                "\n",
                "1. Configure the network and training parameters\n",
                "2. Create and load the dataset\n",
                "3. Build the model\n",
                "4. Train the model\n",
                "5. Visualize the results\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "import os\n",
                "import time\n",
                "\n",
                "from mindspore import ops, jit, Tensor\n",
                "from mindspore import context\n",
                "from mindspore.experimental import optim\n",
                "import numpy as np\n",
                "\n",
                "from mindscience.utils import load_yaml_config\n",
                "\n",
                "from src.utils import DotDict\n",
                "from src.model import ThreeBusPN\n",
                "from src.data import get_dataset\n",
                "from src.trainer import DaeTrainer\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "# Adjust device_target and device_id to match your hardware.\n",
                "context.set_context(mode=context.PYNATIVE_MODE, device_target='Ascend', device_id=1)"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## Configuring the Network and Training Parameters\n",
                "\n",
                "Read the model parameters (model), data parameters (data), and optimizer parameters (optimizer) from the configuration file, and set parameters such as the model initialization, feature dimensions, and sub-network types.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "config = load_yaml_config('./configs/config.yaml')\n",
                "model_params = config['model']\n",
                "data_params = config['data']\n",
                "optim_params = config['optimizer']\n",
                "ode_params = config['ode']\n",
                "summary_params = config['summary']\n",
                "\n",
                "dynamic = DotDict()\n",
                "dynamic.num_irk_stages = model_params['num_irk_stages']\n",
                "dynamic.state_dim = 4\n",
                "dynamic.activation = model_params['dyn_activation']\n",
                "dynamic.initializer = \"Glorot normal\"\n",
                "dynamic.dropout_rate = 0\n",
                "dynamic.batch_normalization = None if model_params['dyn_bn'] == \"no-bn\" else model_params['dyn_bn']\n",
                "dynamic.layer_normalization = None if model_params['dyn_ln'] == \"no-ln\" else model_params['dyn_ln']\n",
                "dynamic.type = model_params['dyn_type']\n",
                "\n",
                "if model_params['unstacked']:\n",
                "    dim_out = dynamic.state_dim * (dynamic.num_irk_stages + 1)\n",
                "else:\n",
                "    dim_out = dynamic.num_irk_stages + 1\n",
                "\n",
                "if model_params['use_input_layer']:\n",
                "    dynamic.layer_size = [dynamic.state_dim * 5] + \\\n",
                "        [model_params['dyn_width']] * model_params['dyn_depth'] + [dim_out]\n",
                "else:\n",
                "    dynamic.layer_size = [dynamic.state_dim] + \\\n",
                "        [model_params['dyn_width']] * model_params['dyn_depth'] + [dim_out]\n",
                "\n",
                "algebraic = DotDict()\n",
                "algebraic.num_irk_stages = model_params['num_irk_stages']\n",
                "dim_out_alg = algebraic.num_irk_stages + 1\n",
                "algebraic.layer_size = [dynamic.state_dim] + \\\n",
                "    [model_params['alg_width']] * model_params['alg_depth'] + [dim_out_alg]\n",
                "algebraic.activation = model_params['alg_activation']\n",
                "algebraic.initializer = \"Glorot normal\"\n",
                "algebraic.dropout_rate = 0\n",
                "algebraic.batch_normalization = None if model_params['alg_bn'] == \"no-bn\" else model_params['alg_bn']\n",
                "algebraic.layer_normalization = None if model_params['alg_ln'] == \"no-ln\" else model_params['alg_ln']\n",
                "algebraic.type = model_params['alg_type']"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## Creating and Loading the Dataset\n",
                "\n",
                "Dataset download link:\n",
                "\n",
                "The file contains a HyperCube dataset of 6000 samples.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "train_dataset, test_dataset, val_dataset = get_dataset(data_params)"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Model Construction\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "\n",
                "net = ThreeBusPN(\n",
                "    dynamic,\n",
                "    algebraic,\n",
                "    use_input_layer=model_params['use_input_layer'],\n",
                "    stacked=not model_params['unstacked'],\n",
                ")"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Loss Function and Optimizer\n",
                "\n",
                "The loss function is MSE. The optimizer is Adam, with ReduceLROnPlateau as the learning-rate scheduler.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "num_irk_stages = model_params['num_irk_stages']\n",
                "# Load the Butcher tableau (IRK weights and stage times) for the chosen number of stages.\n",
                "data_dir = data_params['data_dir']\n",
                "irk_data = np.loadtxt(os.path.join(data_dir, 'IRK_weights', f'Butcher_IRK{num_irk_stages}.txt'), ndmin=2, dtype=np.float32)\n",
                "irk_weights = np.reshape(\n",
                "    irk_data[0:num_irk_stages**2+num_irk_stages], (num_irk_stages+1, num_irk_stages))\n",
                "irk_weights = Tensor(irk_weights)\n",
                "irk_times = irk_data[num_irk_stages**2 + num_irk_stages:]\n",
                "trainer = DaeTrainer(net, irk_weights=irk_weights, irk_times=irk_times, h=ode_params['h'],\n",
                "                     dyn_weight=model_params['dyn_weight'], alg_weight=model_params['alg_weight'])\n",
                "\n",
                "optimizer = optim.Adam(net.trainable_params(), lr=float(optim_params['lr']))\n",
                "scheduler_type = optim_params['scheduler_type']\n",
                "use_scheduler = optim_params['use_scheduler']\n",
                "if use_scheduler:\n",
                "    if scheduler_type == \"plateau\":\n",
                "        scheduler = optim.lr_scheduler.ReduceLROnPlateau(\n",
                "            optimizer,\n",
                "            mode='min',\n",
                "            patience=optim_params['patience'],\n",
                "            factor=optim_params['factor'],\n",
                "        )\n",
                "    elif scheduler_type == \"step\":\n",
                "        scheduler = optim.lr_scheduler.StepLR(\n",
                "            optimizer, step_size=optim_params['patience'], gamma=optim_params['factor']\n",
                "        )\n",
                "    else:\n",
                "        scheduler = None\n",
                "else:\n",
                "    scheduler = None\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Training Function\n",
                "\n",
                "With **MindSpore >= 2.5.0**, neural networks can be trained in a functional programming paradigm, and the single-step training function is decorated with `jit`.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "\n",
                "def forward_fn(x):\n",
                "    loss = trainer.get_loss(x)\n",
                "    return loss\n",
                "\n",
                "grad_fn = ops.value_and_grad(\n",
                "    forward_fn, None, optimizer.parameters, has_aux=False)\n",
                "\n",
                "@jit\n",
                "def train_step(x):\n",
                "    loss, grads = grad_fn(x)\n",
                "    optimizer(grads)\n",
                "    return loss"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Model Training\n",
                "\n",
                "Inference runs alongside training. You can load the test dataset directly, and after every n training epochs the inference accuracy on the test set is reported.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "test_interval = summary_params['test_interval']\n",
                "\n",
                "for epoch in range(1, 1 + optim_params['epochs']):\n",
                "    # train\n",
                "    time_beg = time.time()\n",
                "    net.set_train(True)\n",
                "    loss_val = []\n",
                "    for data, in train_dataset:\n",
                "        step_train_loss = train_step(data)\n",
                "        loss_val.append(step_train_loss.numpy())\n",
                "    time_end = time.time()\n",
                "\n",
                "    print(\n",
                "        f\"epoch: {epoch} train loss: {np.mean(loss_val)} epoch time: {time_end-time_beg:.3f}s\")\n",
                "    if use_scheduler:\n",
                "        if scheduler_type == \"plateau\":\n",
                "            net.set_train(False)\n",
                "            loss_val = trainer.get_loss(val_dataset)\n",
                "            scheduler.step(loss_val)\n",
                "        else:\n",
                "            scheduler.step()\n",
                "    # test\n",
                "    if epoch % test_interval == 0:\n",
                "        net.set_train(False)\n",
                "        loss_test = trainer.get_loss(test_dataset)\n",
                "        print(f'test loss: {loss_test}')\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## Result Visualization\n",
                "\n",
                "The training loss curve is shown below:\n",
                "<p align = \"center\">\n",
                "<img src=\"images/loss.png\" height=\"400\" />\n",
                "</p>\n",
                "\n",
                "After 30,000 epochs, both the training and test losses drop to about 5e-3.\n",
                "\n",
                "The relative L2 errors of the network's predictions for the four dynamic variables and one algebraic variable are shown below:\n",
                "<p align = \"center\">\n",
                "<img src=\"images/L2relative_error_0.png\" height=\"200\" />\n",
                "<img src=\"images/L2relative_error_1.png\" height=\"200\" />\n",
                "</p>\n",
                "<p align = \"center\">\n",
                "<img src=\"images/L2relative_error_2.png\" height=\"200\" />\n",
                "<img src=\"images/L2relative_error_3.png\" height=\"200\" />\n",
                "<img src=\"images/L2relative_error_4.png\" height=\"200\" />\n",
                "</p>\n"
            ]
        }
    ],
    "metadata": {
        "kernelspec": {
            "display_name": "Python 3.9.16 64-bit ('gbq_2.0': conda)",
            "language": "python",
            "name": "python3"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "ipython",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.9.16"
        },
        "vscode": {
            "interpreter": {
                "hash": "b9063439a3781aed32d6b0dd4804a0c8b51ecec7893a0f31b99846bc91ef39eb"
            }
        }
    },
    "nbformat": 4,
    "nbformat_minor": 0
}
