{
    "cells": [
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "# DAE-PINN\n",
                "\n",
                "## Environment Setting\n",
                "\n",
                "This example requires **MindSpore >= 2.5.0** to call the following interfaces: *mindspore.jit, mindspore.jit_class, mindspore.data_sink*. For details, please refer to [MindSpore Installation](https://www.mindspore.cn/install).\n",
                "\n",
                "In addition, **MindScience >= 0.1.0** is required. If it is not installed in your current environment, install it with the following commands, choosing the backend and version that match your setup.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "mindscience_version = \"0.1.0\"  # update if needed\n",
                "# This example targets the Ascend (NPU) backend; adjust the package below if your\n",
                "# environment uses a different backend.\n",
                "!pip uninstall -y mindscience-ascend\n",
                "!pip install mindscience-ascend==$mindscience_version"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Overview\n",
                "\n",
                "* **Dynamic Security Assessment Needs of Power Networks**: With the integration of distributed energy resources into power networks, market liberalization, and the adoption of complex communication and control algorithms, the operating conditions and potential fault scenarios of power networks are becoming more diverse, affecting their security. To evaluate the dynamic security of power networks, it is necessary to simulate their dynamic response when facing a single fault. This requires solving a set of nonlinear differential-algebraic equations (DAEs). Traditional explicit integration schemes fail to solve DAEs, and commercial solvers are computationally expensive and memory-intensive, limiting the online deployment of dynamic security assessment.\n",
                "* **The Potential and Challenges of Deep Learning in Scientific and Engineering Fields**: Despite the significant success of deep learning in computer vision and natural language processing, its application in learning scientific and engineering dynamic systems is limited. This is due to the high cost of data collection and the lack of robustness and generalization ability of most traditional deep learning methods when data is limited.\n",
                "\n",
                "## How It Works\n",
                "\n",
                "* The DAE-PINNs framework combines an implicit Runge-Kutta time-stepping scheme (designed specifically for solving DAEs) with physics-informed neural networks (PINNs). During time stepping, assuming the integration has reached $(t_n, y_n, z_n)$, the goal is to advance to $(t_{n+1}, y_{n+1}, z_{n+1})$. Applying the implicit Runge-Kutta scheme yields a set of equations: update formulas for the internal stages and for the final state.\n",
                "* The neural network is constrained to satisfy the DAE through a penalty method. During training, the residual of the DAE is used as part of the loss function. This enables the network to not only fit the data but also satisfy the DAE equations described by physical laws during the learning process, thereby incorporating physical information into the learning process of the neural network.\n",
                "\n",
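                "The stepping equations can be written out for the semi-explicit DAE $\\dot{y} = f(y, z),\\ 0 = g(y, z)$. For a $q$-stage implicit Runge-Kutta scheme with coefficients $a_{ij}$, weights $b_j$, and step size $h$ (the standard formulation, sketched here for reference), the internal stages and the update satisfy\n",
                "\n",
                "$$Y_i = y_n + h \\sum_{j=1}^{q} a_{ij}\\, f(Y_j, Z_j), \\qquad 0 = g(Y_i, Z_i), \\qquad i = 1, \\dots, q,$$\n",
                "\n",
                "$$y_{n+1} = y_n + h \\sum_{j=1}^{q} b_j\\, f(Y_j, Z_j), \\qquad 0 = g(y_{n+1}, z_{n+1}).$$\n",
                "\n",
                "The neural network is trained so that its predicted stage values $(Y_i, Z_i)$ drive the residuals of these equations toward zero.\n",
                "\n",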
                "## Method Details\n",
                "\n",
                "* **Problem Setup**: The DAE is given in semi-explicit form, with dynamic states $y$ and algebraic variables $z$, where $f$ describes the differential equations and $g$ the algebraic equations. It is assumed that $f$ and $g$ are sufficiently differentiable and that the DAE has index 1, i.e., the Jacobian $g_z$ is invertible and bounded in a neighborhood of the exact solution. This guarantees that the algebraic equations have a unique local solution $z = G(y)$, so the DAE can be reduced to a system of ordinary differential equations.\n",
                "* **Network Structure**: Similar to standard PINNs, DAE-PINNs typically consist of an input layer, multiple hidden layers, and an output layer. The input layer receives information such as time and dynamic states, the hidden layers extract and transform features through nonlinear activation functions, and the output layer predicts the values of algebraic variables.\n",
                "* **Loss Function**: The loss function consists of two parts. One part is the data loss, used to fit data from known points such as initial and boundary conditions. The other part is the physics loss, which is the residual loss of the DAE. By automatically differentiating the network's outputs with respect to time and state variables, the residual of the DAE equation is obtained and used as part of the physics loss. Optimizing these two parts of the loss function enables the network to both fit the data and satisfy the physical equations.\n",
                "\n",
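                "As a sketch (the exact weighting is configuration-dependent), the total loss combines the two parts as\n",
                "\n",
                "$$\\mathcal{L} = \\mathcal{L}_{\\text{data}} + \\lambda\\, \\mathcal{L}_{\\text{physics}}, \\qquad \\mathcal{L}_{\\text{physics}} = \\frac{1}{N} \\sum_{k=1}^{N} \\left( \\| r_f^{(k)} \\|^2 + \\| r_g^{(k)} \\|^2 \\right),$$\n",
                "\n",
                "where $r_f$ and $r_g$ denote the residuals of the differential and algebraic equations at the $k$-th sample, and $\\lambda$ is a penalty weight (the code in this notebook applies separate weights, `dyn_weight` and `alg_weight`, to the two residual terms).\n",
                "\n",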
                "<p align = \"center\">\n",
                "<img src=\"images/model.png\" height=\"300\" />\n",
                "</p>\n",
                "\n",
                "As shown in the figure above, the network structure of DAE-PINN differs from a traditional neural network in that it incorporates physical information: by constructing a specific loss function, the network not only fits the data but also satisfies the DAE equations described by physical laws, which improves the accuracy and generalization ability of the model. The overall architecture comprises two networks that separately process the dynamic states and the algebraic states. The inputs are time information and the dynamic states, and the outputs are predictions of the dynamic states and algebraic variables; in the power-network case, these outputs simulate the network's dynamic behavior. Three types of backbone are supported: `fnn`, `attention`, and `conv1d`. `fnn` is a multilayer perceptron, `attention` is a feed-forward network with a transformer-like attention mechanism, and `conv1d` is a feed-forward network built on `Conv1D`.\n",
                "\n",
                "## Preparation\n",
                "\n",
                "Before starting, ensure that the required versions of MindSpore and MindScience are correctly installed. If not, you can install them through:\n",
                "\n",
                "* [MindSpore Installation Page](https://www.mindspore.cn/install)\n",
                "* [MindScience Installation Page](https://gitee.com/mindspore/mindscience)\n",
                "\n",
                "## DAE-PINN Implementation\n",
                "\n",
                "The implementation of DAE-PINN consists of the following five steps:\n",
                "\n",
                "  1. Configure network and training parameters\n",
                "  2. Dataset creation and loading\n",
                "  3. Model construction\n",
                "  4. Model training\n",
                "  5. Result visualization\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "import os\n",
                "import time\n",
                "\n",
                "from mindspore import ops, jit, Tensor\n",
                "from mindspore import context\n",
                "from mindspore.experimental import optim\n",
                "import numpy as np\n",
                "\n",
                "from mindscience.utils import load_yaml_config\n",
                "\n",
                "from src.utils import DotDict\n",
                "from src.model import ThreeBusPN\n",
                "from src.data import get_dataset\n",
                "from src.trainer import DaeTrainer\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "context.set_context(mode=context.PYNATIVE_MODE, device_target='Ascend', device_id=1)\n",
                "config = load_yaml_config('./configs/config.yaml')\n",
                "model_params, data_params, optim_params, ode_params, summary_params = config[\n",
                "    'model'], config['data'], config['optimizer'], config['ode'], config['summary']\n",
                "\n",
                "dynamic = DotDict()\n",
                "dynamic.num_irk_stages = model_params['num_irk_stages']\n",
                "dynamic.state_dim = 4\n",
                "dynamic.activation = model_params['dyn_activation']\n",
                "dynamic.initializer = \"Glorot normal\"\n",
                "dynamic.dropout_rate = 0\n",
                "dynamic.batch_normalization = None if model_params['dyn_bn'] == \"no-bn\" else model_params['dyn_bn']\n",
                "dynamic.layer_normalization = None if model_params['dyn_ln'] == \"no-ln\" else model_params['dyn_ln']\n",
                "dynamic.type = model_params['dyn_type']\n",
                "\n",
                "if model_params['unstacked']:\n",
                "    dim_out = dynamic.state_dim * (dynamic.num_irk_stages + 1)\n",
                "else:\n",
                "    dim_out = dynamic.num_irk_stages + 1\n",
                "\n",
                "if model_params['use_input_layer']:\n",
                "    dynamic.layer_size = [dynamic.state_dim * 5] + \\\n",
                "        [model_params['dyn_width']] * model_params['dyn_depth'] + [dim_out]\n",
                "else:\n",
                "    dynamic.layer_size = [dynamic.state_dim] + \\\n",
                "        [model_params['dyn_width']] * model_params['dyn_depth'] + [dim_out]\n",
                "\n",
                "algebraic = DotDict()\n",
                "algebraic.num_irk_stages = model_params['num_irk_stages']\n",
                "dim_out_alg = algebraic.num_irk_stages + 1\n",
                "algebraic.layer_size = [dynamic.state_dim] + \\\n",
                "    [model_params['alg_width']] * model_params['alg_depth'] + [dim_out_alg]\n",
                "algebraic.activation = model_params['alg_activation']\n",
                "algebraic.initializer = \"Glorot normal\"\n",
                "algebraic.dropout_rate = 0\n",
                "algebraic.batch_normalization = None if model_params['alg_bn'] == \"no-bn\" else model_params['alg_bn']\n",
                "algebraic.layer_normalization = None if model_params['alg_ln'] == \"no-ln\" else model_params['alg_ln']\n",
                "algebraic.type = model_params['alg_type']"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## Dataset Creation and Loading\n",
                "\n",
                "Dataset download link:\n",
                "This file contains 6000 samples of the HyperCube dataset.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "train_dataset, test_dataset, val_dataset = get_dataset(data_params)"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Model Construction\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "net = ThreeBusPN(\n",
                "    dynamic,\n",
                "    algebraic,\n",
                "    use_input_layer=model_params['use_input_layer'],\n",
                "    stacked=not model_params['unstacked'],\n",
                ")"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Loss Function and Optimizer\n",
                "\n",
                "The loss function is mean squared error (MSE). Adam is used as the optimizer, and the learning rate can be scheduled with either ReduceLROnPlateau or StepLR, depending on the configuration.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "num_irk_stages = model_params['num_irk_stages']\n",
                "# Load the Butcher tableau of the q-stage implicit Runge-Kutta scheme.\n",
                "data_dir = data_params['data_dir']\n",
                "irk_data = np.loadtxt(os.path.join(data_dir, 'IRK_weights', f'Butcher_IRK{num_irk_stages}.txt'), ndmin=2, dtype=np.float32)\n",
                "# The first q^2 + q entries hold the stage coefficients and weights, reshaped\n",
                "# into a (q + 1, q) matrix; the remaining entries are the quadrature nodes.\n",
                "irk_weights = np.reshape(irk_data[0:num_irk_stages**2+num_irk_stages], (num_irk_stages+1, num_irk_stages))\n",
                "irk_weights = Tensor(irk_weights)\n",
                "irk_times = irk_data[num_irk_stages**2 + num_irk_stages:]\n",
                "trainer = DaeTrainer(net, irk_weights=irk_weights, irk_times=irk_times, h=ode_params['h'],\n",
                "                     dyn_weight=model_params['dyn_weight'], alg_weight=model_params['alg_weight'])\n",
                "\n",
                "optimizer = optim.Adam(net.trainable_params(), lr=float(optim_params['lr']))\n",
                "scheduler_type = optim_params['scheduler_type']\n",
                "use_scheduler = optim_params['use_scheduler']\n",
                "if use_scheduler:\n",
                "    if scheduler_type == \"plateau\":\n",
                "        scheduler = optim.lr_scheduler.ReduceLROnPlateau(\n",
                "            optimizer,\n",
                "            mode='min',\n",
                "            patience=optim_params['patience'],\n",
                "            factor=optim_params['factor'],\n",
                "        )\n",
                "    elif scheduler_type == \"step\":\n",
                "        scheduler = optim.lr_scheduler.StepLR(\n",
                "            optimizer, step_size=optim_params['patience'], gamma=optim_params['factor']\n",
                "        )\n",
                "    else:\n",
                "        scheduler = None\n",
                "else:\n",
                "    scheduler = None\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Training Function\n",
                "\n",
                "With MindSpore >= 2.5.0, neural networks can be trained in a functional programming paradigm. The single-step training function is compiled with the `jit` decorator.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "def forward_fn(x):\n",
                "    loss = trainer.get_loss(x)\n",
                "    return loss\n",
                "\n",
                "grad_fn = ops.value_and_grad(\n",
                "    forward_fn, None, optimizer.parameters, has_aux=False)\n",
                "\n",
                "@jit\n",
                "def train_step(x):\n",
                "    loss, grads = grad_fn(x)\n",
                "    optimizer(grads)\n",
                "    return loss"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Model Training\n",
                "\n",
                "During model training, inference is performed alongside training: the test dataset is loaded directly, and the loss on the test set is reported every `test_interval` epochs.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "test_interval = summary_params['test_interval']\n",
                "\n",
                "for epoch in range(1, 1 + optim_params['epochs']):\n",
                "    # train\n",
                "    time_beg = time.time()\n",
                "    net.set_train(True)\n",
                "    loss_val = []\n",
                "    for data, in train_dataset:\n",
                "        step_train_loss = train_step(data)\n",
                "        loss_val.append(step_train_loss.numpy())\n",
                "    time_end = time.time()\n",
                "\n",
                "    print(\n",
                "        f\"epoch: {epoch} train loss: {np.mean(loss_val)} epoch time: {time_end-time_beg:.3f}s\")\n",
                "    if use_scheduler:\n",
                "        if scheduler_type == \"plateau\":\n",
                "            net.set_train(False)\n",
                "            loss_val = trainer.get_loss(val_dataset)\n",
                "            scheduler.step(loss_val)\n",
                "        else:\n",
                "            scheduler.step()\n",
                "    # test\n",
                "    if epoch % test_interval == 0:\n",
                "        net.set_train(False)\n",
                "        loss_test = trainer.get_loss(test_dataset)\n",
                "        print(f'test loss: {loss_test}')\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "collapsed": false,
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## Result Visualization\n",
                "\n",
                "The loss curve during model training is shown below:\n",
                "\n",
                "<p align = \"center\">\n",
                "<img src=\"images/loss.png\" height=\"400\" />\n",
                "</p>\n",
                "\n",
                "It can be observed that after 30,000 training iterations, both the training and test losses have decreased to around 5e-3.\n",
                "\n",
                "The L2 relative error of the network's predictions for the four dynamic variables and one algebraic variable is shown in the following figures:\n",
                "<p align = \"center\">\n",
                "<img src=\"images/L2relative_error_0.png\" height=\"200\" />\n",
                "<img src=\"images/L2relative_error_1.png\" height=\"200\" />\n",
                "</p>\n",
                "<p align = \"center\">\n",
                "<img src=\"images/L2relative_error_2.png\" height=\"200\" />\n",
                "<img src=\"images/L2relative_error_3.png\" height=\"200\" />\n",
                "<img src=\"images/L2relative_error_4.png\" height=\"200\" />\n",
                "</p>\n"
            ]
        }
    ],
    "metadata": {
        "kernelspec": {
            "display_name": "Python 3.9.16 64-bit ('gbq_2.0': conda)",
            "language": "python",
            "name": "python3"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "ipython",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.9.16"
        },
        "vscode": {
            "interpreter": {
                "hash": "b9063439a3781aed32d6b0dd4804a0c8b51ecec7893a0f31b99846bc91ef39eb"
            }
        }
    },
    "nbformat": 4,
    "nbformat_minor": 0
}
