{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Training an RL Agent with Stable-Baselines3 Using a GEM Environment\n",
    "\n",
     "This notebook is an educational introduction to Stable-Baselines3 using a gym-electric-motor (GEM) environment. The goal is to give an understanding of what Stable-Baselines3 is and how to use it to train and evaluate a reinforcement learning agent that solves a current control problem from the GEM toolbox."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Installation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Before you start, make sure that you have both gym-electric-motor and Stable-Baselines3 installed. You can install both easily using pip:\n",
     "\n",
     "- ```pip install gym-electric-motor```\n",
     "- ```pip install stable-baselines3```\n",
     "\n",
     "Alternatively, you can install their latest development versions directly from GitHub:\n",
    "\n",
    "- https://github.com/upb-lea/gym-electric-motor\n",
    "- https://github.com/DLR-RM/stable-baselines3\n",
    "\n",
    "For this notebook, the following cell will do the job:"
   ]
  },
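  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install both packages from PyPI; skip this cell if they are already installed.\n",
    "# Note: exact version pinning may be needed depending on your Python setup.\n",
    "%pip install gym-electric-motor stable-baselines3"
   ]
  },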
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Setting up a GEM Environment\n",
    "\n",
     "The basic idea behind reinforcement learning is to create a so-called agent that learns by itself to solve a specified task in a given environment. \n",
     "The environment gives the agent feedback on its actions and reinforces the targeted behavior.\n",
    "In this notebook, the task is to train a controller for the current control of a *permanent magnet synchronous motor* (*PMSM*).\n",
    " \n",
    "In the following, the used GEM-environment is briefly presented, but this notebook does not focus directly on the detailed usage of GEM. If you are new to the used environment and interested in finding out what it does and how to use it, you should take a look at the [GEM cookbook](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/example_notebooks/GEM_cookbook.ipynb).\n",
    "\n",
    "The basic idea of the control setup from the GEM-environment is displayed in the following figure. \n",
    "\n",
    "![](../../docs/plots/SCML_Overview.png)\n",
    "\n",
     "The agent controls the converter, which converts the supply currents to the currents flowing into the motor - for the *PMSM*: $i_{sq}$ and $i_{sd}$.\n",
    "\n",
    "In the continuous case, the agent's action equals a duty cycle which will be modulated into a corresponding voltage. \n",
    "\n",
     "In the discrete case, the agent's actions denote the switching states of the converter at the given instant, so only a discrete number of options is available. In this notebook, the *discrete B6 bridge converter* with six switches is used for the PMSM by default. This converter provides a total of eight possible actions.\n",
    "\n",
    "![](../../docs/plots/B6.svg)\n",
    "\n",
    "The motor schematic is the following:\n",
    "\n",
    "\n",
    "![](../../docs/plots/ESBdq.svg)\n",
    "\n",
    "And the electrical ODEs for that motor are:\n",
    "\n",
    "<h3 align=\"center\">\n",
    "\n",
    "   $ \\frac{\\mathrm{d}i_{sd}}{\\mathrm{d}t}=\\frac{u_{sd} + p\\omega_{me}L_q i_{sq} - R_s i_{sd}}{L_d} $ <br><br>\n",
    "    $\\frac{\\mathrm{d} i_{sq}}{\\mathrm{d} t}=\\frac{u_{sq} - p \\omega_{me} (L_d i_{sd} + \\mathit{\\Psi}_p) - R_s i_{sq}}{L_q}$ <br><br>\n",
    "   $\\frac{\\mathrm{d}\\epsilon_{el}}{\\mathrm{d}t} = p\\omega_{me}$\n",
    "\n",
    "</h3>\n",
    "\n",
     "The agent's target is to learn to control the currents. For this, a reference generator produces a trajectory that the agent has to follow. \n",
     "The agent therefore has to learn a function (policy) that maps the given states, references and rewards to appropriate actions.\n",
    "\n",
    "For a deeper understanding of the used models behind the environment see the [documentation](https://upb-lea.github.io/gym-electric-motor/).\n",
     "Comprehensive learning material on RL is also [freely available](https://github.com/upb-lea/reinforcement_learning_course_materials)."
   ]
  },
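  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, stand-alone illustration (not part of the training pipeline), the electrical ODEs above can be evaluated numerically. The parameter values below follow the Brusa HSM16.17.12-C01 data that is also used for the environment later in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative evaluation of the dq-current ODEs of the PMSM\n",
    "p = 3             # pole pairs\n",
    "r_s = 17.932e-3   # stator resistance in Ohm\n",
    "l_d = 0.37e-3     # d-axis inductance in H\n",
    "l_q = 1.2e-3      # q-axis inductance in H\n",
    "psi_p = 65.65e-3  # permanent magnet flux linkage in Vs\n",
    "\n",
    "def current_derivatives(i_sd, i_sq, u_sd, u_sq, omega_me):\n",
    "    \"\"\"Right-hand sides of the d- and q-axis current ODEs.\"\"\"\n",
    "    di_sd = (u_sd + p * omega_me * l_q * i_sq - r_s * i_sd) / l_d\n",
    "    di_sq = (u_sq - p * omega_me * (l_d * i_sd + psi_p) - r_s * i_sq) / l_q\n",
    "    return di_sd, di_sq\n",
    "\n",
    "# At standstill with zero currents, only the applied voltages drive the currents\n",
    "print(current_derivatives(0.0, 0.0, 1.0, 1.0, 0.0))"
   ]
  },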
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "from typing import Tuple, Optional, List, Type\n",
    "\n",
    "import gymnasium as gym\n",
    "import numpy as np\n",
    "import torch as th\n",
    "import torch.nn as nn\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from gymnasium import ObservationWrapper\n",
    "from gymnasium.wrappers import TimeLimit\n",
    "from gymnasium.spaces import Box\n",
    "\n",
    "from stable_baselines3 import DDPG, TD3\n",
    "from stable_baselines3.common.policies import BaseModel\n",
    "from stable_baselines3.common.preprocessing import get_action_dim\n",
    "from stable_baselines3.common.torch_layers import BaseFeaturesExtractor\n",
    "from stable_baselines3.common.utils import update_learning_rate\n",
    "from stable_baselines3.td3.policies import Actor, TD3Policy\n",
    "\n",
    "from gym_electric_motor import gym_electric_motor as gem\n",
    "from gym_electric_motor.physical_systems.mechanical_loads import ConstantSpeedLoad\n",
    "from gym_electric_motor.physical_system_wrappers import CosSinProcessor, DeadTimeProcessor, DqToAbcActionProcessor\n",
    "from gym_electric_motor.envs.motors import ActionType, ControlType, Motor, MotorType\n",
    "from gym_electric_motor.physical_systems.solvers import EulerSolver\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FeatureWrapper(ObservationWrapper):\n",
    "    \"\"\"\n",
    "    Wrapper class which wraps the environment to change its observation from a tuple to a flat vector.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, env):\n",
    "        \"\"\"\n",
    "        Changes the observation space from a tuple to a flat vector\n",
    "        \n",
    "        Args:\n",
    "            env(GEM env): GEM environment to wrap\n",
    "        \"\"\"\n",
    "        super(FeatureWrapper, self).__init__(env)\n",
    "        state_space = self.env.observation_space[0]\n",
    "        ref_space = self.env.observation_space[1]\n",
    "        \n",
    "        new_low = np.concatenate((state_space.low,\n",
    "                                  ref_space.low))\n",
    "        new_high = np.concatenate((state_space.high,\n",
    "                                   ref_space.high))\n",
    "\n",
    "        self.observation_space = Box(new_low, new_high)\n",
    "\n",
    "    def observation(self, observation):\n",
    "        \"\"\"\n",
    "        Gets called at each return of an observation.\n",
    "        \n",
    "        \"\"\"\n",
    "        observation = np.concatenate((observation[0],\n",
    "                                      observation[1],\n",
    "                                      ))\n",
    "        return observation\n",
    "    \n",
     "class LastActionWrapper(gym.Wrapper):\n",
     "    \"\"\"\n",
     "    Wrapper that appends the last applied action to the observation vector.\n",
     "    This is required by the DeadTimeProcessor used below, so that the agent\n",
     "    knows which action is still pending due to the converter dead time.\n",
     "    \"\"\"\n",
     "\n",
     "    def __init__(self, env):\n",
     "        super(LastActionWrapper, self).__init__(env)\n",
    "        state_space = self.env.observation_space\n",
    "        action_space = self.env.action_space\n",
    "        \n",
    "        new_low = np.concatenate((state_space.low,\n",
    "                                  action_space.low))\n",
    "        new_high = np.concatenate((state_space.high,\n",
    "                                   action_space.high))\n",
    "\n",
    "        self.observation_space = Box(new_low, new_high)\n",
    "\n",
    "    def reset(self, **kwargs):\n",
    "        observation, info = self.env.reset(**kwargs)\n",
    "        self.last_action = np.zeros(self.action_space.shape[0], dtype=np.float32)\n",
    "        return np.concatenate((observation, self.last_action)), info\n",
    "\n",
    "    def step(self, action):\n",
    "        observation, reward, terminated, truncated, info = self.env.step(action)\n",
    "        self.last_action = action\n",
    "        return np.concatenate((observation, self.last_action)), reward, terminated, truncated, info\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the environment type\n",
    "motor = Motor(MotorType.PermanentMagnetSynchronousMotor,\n",
    "              ControlType.CurrentControl,\n",
    "              ActionType.Continuous)\n",
    "tau = 1e-4    # The duration of each sampling step\n",
    "\n",
    "# motor type: Brusa HSM16.17.12-C01\n",
    "motor_parameter = dict(\n",
     "    p=3,                # [p] = 1, number of pole pairs\n",
    "    r_s=17.932e-3,      # [r_s] = Ohm, stator resistance\n",
    "    l_d=0.37e-3,        # [l_d] = H, d-axis inductance\n",
    "    l_q=1.2e-3,         # [l_q] = H, q-axis inductance\n",
    "    psi_p=65.65e-3,     # [psi_p] = Vs, magnetic flux of the permanent magnet\n",
    ")  \n",
    "\n",
     "nominal_values=dict(\n",
     "    omega=6000*2*np.pi/60,  # nominal angular velocity in rad/s (6000 rpm)\n",
     "    i=240,                  # nominal motor current in A\n",
     "    u=350,                  # nominal voltage in V\n",
     ")\n",
    "\n",
    "limit_values = dict(\n",
    "    omega = 6000*2*np.pi/60,\n",
    "    i = 240*1.2,\n",
    "    u = 350,\n",
    ")\n",
    "\n",
    "pmsm_init = {\n",
    "    'states': {\n",
    "        'i_sd': 0.,\n",
    "        'i_sq': 0.,\n",
    "        'epsilon': 0.,\n",
    "                }\n",
    "}\n",
    "\n",
     "physical_system_wrappers = [\n",
     "    # Wrapped directly around the physical system\n",
     "    CosSinProcessor(angle='epsilon'),\n",
     "    # Wrapped around the CosSinProcessor, so the generated states (cos and sin) can be accessed\n",
     "    DqToAbcActionProcessor.make('PMSM'),\n",
     "    # Only use the DeadTimeProcessor together with a last-action concatenator\n",
     "    # for the state (the LastActionWrapper defined above)\n",
     "    DeadTimeProcessor(steps=1),\n",
     "]\n",
    "\n",
     "# Define a constant-speed load running at 20 % of 4000 rpm (converted to rad/s)\n",
     "load = ConstantSpeedLoad(\n",
     "    omega_fixed=4000*2*np.pi/60 * 0.2\n",
     ")\n",
    "\n",
    "env = gem.make(  \n",
    "    motor.env_id(),    \n",
    "    # parameterize the PMSM and update limitations\n",
    "    motor=dict(\n",
    "        motor_parameter=motor_parameter,\n",
    "        limit_values=limit_values,\n",
    "        nominal_values=nominal_values,\n",
    "        motor_initializer=pmsm_init,\n",
    "    ),   \n",
    "    load=load,\n",
    "    tau=tau,\n",
     "    ode_solver=EulerSolver(),\n",
     "    physical_system_wrappers=physical_system_wrappers,  # pass the physical system wrappers\n",
     "    state_filter=[\"i_sd\", \"i_sq\", \"omega\", \"epsilon\", \"sin(epsilon)\", \"cos(epsilon)\"],\n",
    "    supply=dict(u_nominal=350),   \n",
    ")\n",
    "\n",
     "env = TimeLimit(LastActionWrapper(FeatureWrapper(env)), max_episode_steps=200)\n",
    "\n",
    "print(env.action_space.sample())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Training an Agent with Stable-Baselines3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Stable-Baselines3 is a collection of reliable implementations of reinforcement learning algorithms in PyTorch. For the currently available algorithms, see its [documentation](https://stable-baselines3.readthedocs.io/en/master/guide/rl.html).\n",
     "\n",
     "To use an agent provided by Stable-Baselines3, your environment has to expose a [gym interface](https://stable-baselines3.readthedocs.io/en/master/guide/custom_env.html)."
   ]
  },
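  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Whether the wrapped environment from above satisfies this interface can be verified with Stable-Baselines3's built-in environment checker. Running it is optional and it may emit warnings (e.g. about observation scaling), but it catches interface problems early:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from stable_baselines3.common.env_checker import check_env\n",
    "\n",
    "# Raises an error if the wrapped environment violates the gym interface\n",
    "check_env(env, warn=True)"
   ]
  },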
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 3.1 Algorithm and Custom Networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The environment in this control problem poses a continuous action space. Therefore, the [Deep Deterministic Policy Gradient (DDPG)](https://arxiv.org/abs/1509.02971) algorithm is a suitable agent.\n",
     "For the specific implementation of DDPG you can refer to [Stable-Baselines3's docs](https://stable-baselines3.readthedocs.io/en/master/modules/ddpg.html).\n",
     "\n",
     "In this tutorial, two customized multi-layer perceptrons (MLPs) are used as actor and critic. This allows different learning rates for the actor and the critic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "class CustomDDPG(DDPG):\n",
     "    def __init__(self, policy, env, *args, actor_lr=1e-5, critic_lr=1e-4, **kwargs):\n",
     "        super().__init__(policy, env, *args, **kwargs)\n",
     "        self.actor_lr = actor_lr\n",
     "        self.critic_lr = critic_lr\n",
     "\n",
     "    def _update_learning_rate(self, optimizers):\n",
     "        \"\"\"\n",
     "        Custom function to update actor and critic with different learning rates.\n",
     "        Based on https://github.com/DLR-RM/stable-baselines3/issues/338\n",
     "        \"\"\"\n",
    "        actor_optimizer, critic_optimizer = optimizers\n",
    "\n",
    "        update_learning_rate(actor_optimizer, self.actor_lr)\n",
    "        update_learning_rate(critic_optimizer, self.critic_lr)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_network(input_dim, hidden_sizes, output_dim, activations):\n",
    "    \"\"\"\n",
    "    Create a neural network with customizable layers, sizes, and activation functions.\n",
    "\n",
    "    Args:\n",
    "    - input_dim (int): The size of the input layer.\n",
    "    - hidden_sizes (list): List of integers representing the sizes of each hidden layer.\n",
    "    - output_dim (int): The size of the output layer.\n",
    "    - activations (list): List of tuples where each tuple contains the activation function\n",
    "                          name as the first element and any parameters as subsequent elements.\n",
    "                          Pass None for layers without activation.\n",
    "\n",
    "    Returns:\n",
    "    - network (nn.Sequential): The created neural network.\n",
    "    \"\"\"\n",
    "\n",
    "    layers = []\n",
    "    \n",
    "    # Input layer\n",
    "    layers.append(nn.Linear(input_dim, hidden_sizes[0]))\n",
    "    \n",
    "    # Activation function for the first hidden layer\n",
    "    if activations[0] is not None:\n",
    "        activation, *params = activations[0]\n",
    "        act_func = getattr(nn, activation)(*params)\n",
    "        layers.append(act_func)\n",
    "\n",
    "    # Hidden layers\n",
    "    for i in range(1, len(hidden_sizes)):\n",
    "        layers.append(nn.Linear(hidden_sizes[i - 1], hidden_sizes[i]))\n",
    "\n",
    "        # Activation function\n",
    "        if activations[i] is not None:\n",
    "            activation, *params = activations[i]\n",
    "            act_func = getattr(nn, activation)(*params)\n",
    "            layers.append(act_func)\n",
    "\n",
    "    # Output layer\n",
    "    layers.append(nn.Linear(hidden_sizes[-1], output_dim))\n",
    "\n",
    "    # Activation function for output layer\n",
    "    if activations[-1] is not None:\n",
    "        activation, *params = activations[-1]\n",
    "        act_func = getattr(nn, activation)(*params)\n",
    "        layers.append(act_func)\n",
    "\n",
    "    return nn.Sequential(*layers)\n",
    "\n",
    "state_dim = env.observation_space.shape[0]\n",
    "action_dim = env.action_space.shape[0]\n",
    "\n",
    "actor_activations = [('ReLU',), ('LeakyReLU', 0.2), ('Tanh',)]\n",
    "critic_activations = [('ReLU',), ('LeakyReLU', 0.1), None]\n",
    "\n",
     "actor_network = create_network(\n",
     "    input_dim=state_dim,\n",
     "    output_dim=action_dim,\n",
     "    hidden_sizes=[100, 100],\n",
     "    activations=actor_activations)\n",
     "\n",
     "critic_network = create_network(\n",
     "    input_dim=state_dim + action_dim,\n",
     "    output_dim=1,\n",
     "    hidden_sizes=[100, 100],\n",
     "    activations=critic_activations)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "class CustomActor(Actor):\n",
     "    \"\"\"\n",
     "    Actor network (policy) for TD3/DDPG.\n",
     "    \"\"\"\n",
     "    def __init__(self, *args, **kwargs):\n",
     "        super(CustomActor, self).__init__(*args, **kwargs)\n",
     "        # Deep-copy the template network so the online actor and the target\n",
     "        # actor get independent parameter sets (make_actor is called for both).\n",
     "        # WARNING: it must end with a tanh activation to squash the output\n",
     "        import copy\n",
     "        self.mu = copy.deepcopy(actor_network)\n",
    "\n",
    "class CustomContinuousCritic(BaseModel):\n",
    "    \"\"\"\n",
    "    Critic network(s) for DDPG/SAC/TD3.\n",
    "    \"\"\"\n",
    "    def __init__(\n",
    "        self,\n",
    "        observation_space: gym.spaces.Space,\n",
    "        action_space: gym.spaces.Space,\n",
    "        net_arch: List[int],\n",
    "        features_extractor: nn.Module,\n",
    "        features_dim: int,\n",
    "        activation_fn: Type[nn.Module] = nn.ReLU,\n",
    "        normalize_images: bool = True,\n",
    "        n_critics: int = 2,\n",
    "        share_features_extractor: bool = True,\n",
    "    ):\n",
    "        super().__init__(\n",
    "            observation_space,\n",
    "            action_space,\n",
    "            features_extractor=features_extractor,\n",
    "            normalize_images=normalize_images,\n",
    "        )\n",
    "\n",
    "        action_dim = get_action_dim(self.action_space)\n",
    "\n",
    "        self.share_features_extractor = share_features_extractor\n",
    "        self.n_critics = n_critics\n",
     "        self.q_networks = []\n",
     "        import copy  # local import so this cell stays self-contained\n",
     "        for idx in range(n_critics):\n",
     "            # Deep-copy the template so each critic (and its target) has\n",
     "            # independent parameters instead of all sharing one module\n",
     "            q_net = copy.deepcopy(critic_network)\n",
     "            self.add_module(f\"qf{idx}\", q_net)\n",
     "            self.q_networks.append(q_net)\n",
    "\n",
    "    def forward(self, obs: th.Tensor, actions: th.Tensor) -> Tuple[th.Tensor, ...]:\n",
    "        # Learn the features extractor using the policy loss only\n",
    "        # when the features_extractor is shared with the actor\n",
    "        with th.set_grad_enabled(not self.share_features_extractor):\n",
    "            features = self.extract_features(obs, self.features_extractor)\n",
    "        qvalue_input = th.cat([features, actions], dim=1)\n",
    "        return tuple(q_net(qvalue_input) for q_net in self.q_networks)\n",
    "\n",
    "    def q1_forward(self, obs: th.Tensor, actions: th.Tensor) -> th.Tensor:\n",
    "        \"\"\"\n",
    "        Only predict the Q-value using the first network.\n",
    "        This allows to reduce computation when all the estimates are not needed\n",
    "        (e.g. when updating the policy in TD3).\n",
    "        \"\"\"\n",
    "        with th.no_grad():\n",
    "            features = self.extract_features(obs, self.features_extractor)\n",
    "        return self.q_networks[0](th.cat([features, actions], dim=1))\n",
    "\n",
     "class CustomTD3Policy(TD3Policy):\n",
     "    def __init__(self, *args, **kwargs):\n",
     "        super(CustomTD3Policy, self).__init__(*args, **kwargs)\n",
     "\n",
    "    def make_actor(self, features_extractor: Optional[BaseFeaturesExtractor] = None) -> CustomActor:\n",
    "        actor_kwargs = self._update_features_extractor(self.actor_kwargs, features_extractor)\n",
    "        return CustomActor(**actor_kwargs).to(self.device)\n",
    "\n",
    "    def make_critic(self, features_extractor: Optional[BaseFeaturesExtractor] = None) -> CustomContinuousCritic:\n",
    "        critic_kwargs = self._update_features_extractor(self.critic_kwargs, features_extractor)\n",
    "        return CustomContinuousCritic(**critic_kwargs).to(self.device)\n",
    "\n",
    "TD3.policy_aliases[\"CustomTD3Policy\"] = CustomTD3Policy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 Parameterization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the DDPG algorithm you have to define a set of parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "nb_steps = 256000 # number of training steps\n",
     "buffer_size = nb_steps # replay buffer size (number of stored transitions)\n",
     "learning_starts = 32 # memory warmup: steps collected before learning starts\n",
     "train_freq = 1 # the networks are updated every train_freq steps\n",
     "batch_size = 32 # mini-batch size drawn at each update step\n",
     "gamma = 0.85 # discount factor\n",
     "verbose = 1 # verbosity of Stable-Baselines3's prints\n",
     "lr_actor = 1e-4\n",
     "lr_critic = 1e-3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Additionally, you have to define how long your agent shall train. You can set a concrete number of steps or use the environment's temporal resolution to define an in-simulation training time. In this example, the agent is trained for 256000 steps, which at a sampling time of tau = 1e-4 s corresponds to 25.6 seconds of simulated time."
   ]
  },
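  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simulated time covered by the training run: nb_steps * tau\n",
    "print(f\"{nb_steps * tau:.1f} seconds of simulated time\")  # 256000 * 1e-4 = 25.6 s\n"
   ]
  },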
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.3 Training of the Agent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Once you have set up the environment and defined your parameters, starting the training is nothing more than a one-liner: for each algorithm, all you have to do is call its ```learn()``` function. Note, however, that the execution of the training can take a long time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "policy_kwargs = dict(optimizer_class=th.optim.Adam)\n",
     "\n",
     "agent = CustomDDPG(\"CustomTD3Policy\", env, buffer_size=buffer_size, learning_starts=learning_starts,\n",
     "                   train_freq=train_freq, batch_size=batch_size, gamma=gamma, policy_kwargs=policy_kwargs,\n",
     "                   verbose=verbose, actor_lr=lr_actor, critic_lr=lr_critic)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "start_time = time.time()\n",
     "agent.learn(total_timesteps=nb_steps)\n",
     "total_time = time.time() - start_time\n",
     "print(f\"Training took {total_time // 60:.0f} minutes, {total_time % 60:.1f} seconds\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "obs, _ = env.reset()\n",
    "\n",
    "# Take trained agent and loop over environment\n",
    "i_ds = []\n",
    "i_qs = []\n",
    "i_ds_ref = []\n",
    "i_qs_ref = []\n",
    "omegas = []\n",
     "# Indices into the wrapped observation vector:\n",
     "# [filtered states ..., current references ..., last action]\n",
     "i_sd_idx = 0\n",
     "i_sq_idx = 1\n",
     "i_sd_ref_idx = -4\n",
     "i_sq_ref_idx = -3\n",
     "omega_idx = 2\n",
    "for i in range(1000):\n",
    "    action, _states = agent.predict(obs, deterministic=True)\n",
    "    obs, rewards, terminated, truncated, info = env.step(action)\n",
    "    done = terminated or truncated\n",
    "    i_ds.append(obs[i_sd_idx])\n",
    "    i_qs.append(obs[i_sq_idx])\n",
    "    i_ds_ref.append(obs[i_sd_ref_idx])\n",
    "    i_qs_ref.append(obs[i_sq_ref_idx])\n",
    "    omegas.append(obs[omega_idx])\n",
     "    if done:\n",
     "        obs, _ = env.reset()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "time_points = range(len(i_ds))\n",
    "fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(10, 10), sharex=True)\n",
    "\n",
    "# First subplot for i_ds and i_ds_ref\n",
    "ax1.plot(time_points, i_ds, label='i_d', color='blue')\n",
    "ax1.plot(time_points, i_ds_ref, label='i_d_ref', color='orange')\n",
    "ax1.set_ylabel('Normalized Values')\n",
    "ax1.legend()\n",
    "ax1.set_ylim([-1, 1])\n",
    "\n",
    "# Second subplot for i_qs and i_qs_ref\n",
    "ax2.plot(time_points, i_qs, label='i_q', color='blue')\n",
    "ax2.plot(time_points, i_qs_ref, label='i_q_ref', color='orange')\n",
    "ax2.set_ylabel('Normalized Values')\n",
    "ax2.legend()\n",
    "ax2.set_ylim([-1, 1])\n",
    "\n",
    "ax3.plot(time_points, omegas, label='omega', color='grey')\n",
    "ax3.set_xlabel('Time step')\n",
    "ax3.set_ylabel('Normalized Values')\n",
    "ax3.legend()\n",
    "ax3.set_ylim([-1,1])\n",
    "ax3.set_xlim([0, 1000])\n",
    "# Adjust layout\n",
    "plt.tight_layout()\n",
    "\n",
    "# Display the plot\n",
    "plt.show()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "GEM2v0",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
