{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "tutorial_helloworld_DQN_DDPG_PPO.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c1gUG3OCJ5GS"
      },
      "source": [
        "# Demo: ElegantRL_HelloWorld_tutorial (DQN --> DDPG --> PPO)\n",
        "\n",
        "We suggest following this order to learn RL quickly:\n",
        "- DQN (Deep Q-Network), a basic RL algorithm for discrete action spaces.\n",
        "- DDPG (Deep Deterministic Policy Gradient), a basic RL algorithm for continuous action spaces.\n",
        "- PPO (Proximal Policy Optimization), a widely used RL algorithm for continuous action spaces.\n",
        "\n",
        "If you have any suggestions about ElegantRL Helloworld, please discuss them in [ElegantRL issues/135: Suggestions for elegant_helloworld](https://github.com/AI4Finance-Foundation/ElegantRL/issues/135); we keep an eye on that issue.\n",
        "ElegantRL's code, especially the Helloworld, needs a lot of feedback to get better."
      ]
    },
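    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough scalar sketch (plain Python with illustrative names, not ElegantRL's API), the core one-sample update targets of these three algorithms can be written as:\n",
        "\n",
        "```python\n",
        "def dqn_td_target(reward, done, next_q_values, gamma=0.99):\n",
        "    # DQN: bootstrap with the max Q-value over the discrete actions\n",
        "    return reward + gamma * (1.0 - done) * max(next_q_values)\n",
        "\n",
        "def ddpg_td_target(reward, done, next_q_value, gamma=0.99):\n",
        "    # DDPG: bootstrap with the critic's value of the deterministic actor's action\n",
        "    return reward + gamma * (1.0 - done) * next_q_value\n",
        "\n",
        "def ppo_clipped_surrogate(ratio, advantage, clip_eps=0.2):\n",
        "    # PPO: clipped surrogate objective for one sample (to be maximized)\n",
        "    clipped = min(max(ratio, 1.0 - clip_eps), 1.0 + clip_eps)\n",
        "    return min(ratio * advantage, clipped * advantage)\n",
        "```"
      ]
    },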
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DbamGVHC3AeW"
      },
      "source": [
        "# **Part 1: Install ElegantRL**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "U35bhkUqOqbS",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "42c4d1a1-3e31-40d4-de5a-511dad532915"
      },
      "source": [
        "# install elegantrl library\n",
        "!pip install git+https://github.com/AI4Finance-LLC/ElegantRL.git"
      ],
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Collecting git+https://github.com/AI4Finance-LLC/ElegantRL.git\n",
            "  Cloning https://github.com/AI4Finance-LLC/ElegantRL.git to /tmp/pip-req-build-120vlmei\n",
            "  Running command git clone -q https://github.com/AI4Finance-LLC/ElegantRL.git /tmp/pip-req-build-120vlmei\n",
            "Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (0.17.3)\n",
            "Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (3.2.2)\n",
            "Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (1.21.6)\n",
            "Requirement already satisfied: pybullet in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (3.2.2)\n",
            "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (1.10.0+cu111)\n",
            "Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (4.1.2.30)\n",
            "Requirement already satisfied: box2d-py in /usr/local/lib/python3.7/dist-packages (from elegantrl==0.3.3) (2.3.8)\n",
            "Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->elegantrl==0.3.3) (1.5.0)\n",
            "Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->elegantrl==0.3.3) (1.4.1)\n",
            "Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->elegantrl==0.3.3) (1.3.0)\n",
            "Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->elegantrl==0.3.3) (0.16.0)\n",
            "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->elegantrl==0.3.3) (2.8.2)\n",
            "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->elegantrl==0.3.3) (0.11.0)\n",
            "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->elegantrl==0.3.3) (3.0.8)\n",
            "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->elegantrl==0.3.3) (1.4.2)\n",
            "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib->elegantrl==0.3.3) (4.1.1)\n",
            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->elegantrl==0.3.3) (1.15.0)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "\n",
        "## **Part 2: Import ElegantRL helloworld**\n",
        "\n",
        "We hope that `ElegantRL Helloworld` helps people who want to learn reinforcement learning quickly run a few introductory examples.\n",
        "- **Fewer lines of code**: fewer than 1,000 lines in total.\n",
        "- **Fewer package requirements**: only `torch` and `gym`.\n",
        "- **A consistent style** with the full version of ElegantRL.\n",
        "\n",
        "![File_structure of ElegantRL](https://github.com/AI4Finance-Foundation/ElegantRL/raw/master/figs/File_structure.png)\n",
        "\n",
        "One-sentence summary: an agent (`agent.py`) with Actor-Critic networks (`net.py`) is trained (`run.py`) by interacting with an environment (`env.py`).\n",
        "\n",
        "\n",
        "In this tutorial, we only need to download the directory from [elegantrl_helloworld](https://github.com/AI4Finance-Foundation/ElegantRL/tree/master/elegantrl_helloworld) using the following code.\n",
        "\n",
        "The files in `elegantrl_helloworld` include:\n",
        "`config.py`, `agent.py`, `net.py`, `env.py`, `run.py`"
      ],
      "metadata": {
        "id": "zJPivVxHMrAt"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# clone the repository, because `wget` cannot download a GitHub directory (it fetches an HTML page)\n",
        "!rm -rf /content/ElegantRL /content/elegantrl_helloworld  # remove if the directories exist\n",
        "!git clone --depth 1 https://github.com/AI4Finance-Foundation/ElegantRL.git /content/ElegantRL\n",
        "!cp -r /content/ElegantRL/elegantrl_helloworld /content/elegantrl_helloworld"
      ],
      "metadata": {
        "id": "sw_gE-IpovQ4",
        "outputId": "b291f901-7b68-41ea-c261-96b8d6fe4fdc",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": 6,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from elegantrl_helloworld.run import train_agent, evaluate_agent\n",
        "from elegantrl_helloworld.env import get_gym_env_args\n",
        "from elegantrl_helloworld.config import Arguments"
      ],
      "metadata": {
        "id": "nweGpiR1M0yA"
      },
      "execution_count": 7,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UVdmpnK_3Zcn"
      },
      "source": [
        "## **Part 3: Train DQN on a discrete action space task.**\n",
        "\n",
        "Train DQN on the [**discrete action** space task `CartPole`](https://gym.openai.com/envs/CartPole-v1/).\n",
        "\n",
        "See [/helloworld/erl_config.py](https://github.com/AI4Finance-Foundation/ElegantRL/blob/master/helloworld/erl_config.py) for more information about the hyperparameters.\n",
        "\n",
        "```\n",
        "class Arguments:\n",
        "    def __init__(self, agent_class, env_func=None, env_args=None):\n",
        "        self.env_num = self.env_args['env_num']  # env_num = 1. In vector env, env_num > 1.\n",
        "        self.max_step = self.env_args['max_step']  # the max step of an episode\n",
        "        self.env_name = self.env_args['env_name']  # the env name. Be used to set 'cwd'.\n",
        "        self.state_dim = self.env_args['state_dim']  # vector dimension (feature number) of state\n",
        "        self.action_dim = self.env_args['action_dim']  # vector dimension (feature number) of action\n",
        "        self.if_discrete = self.env_args['if_discrete']  # discrete or continuous action space\n",
        "        ...\n",
        "```"
      ]
    },
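    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `explore_rate` hyperparameter set in the next cell controls epsilon-greedy exploration. A minimal sketch of the rule (plain Python, not ElegantRL's implementation):\n",
        "\n",
        "```python\n",
        "import random\n",
        "\n",
        "def epsilon_greedy(q_values, explore_rate=0.25):\n",
        "    # with probability `explore_rate`, pick a uniformly random action;\n",
        "    # otherwise pick the greedy (argmax-Q) action\n",
        "    if random.random() < explore_rate:\n",
        "        return random.randrange(len(q_values))\n",
        "    return max(range(len(q_values)), key=q_values.__getitem__)\n",
        "```"
      ]
    },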
    {
      "cell_type": "code",
      "source": [
        "from elegantrl_helloworld.agent import AgentDQN\n",
        "agent_class = AgentDQN\n",
        "env_name = \"CartPole-v0\"\n",
        "\n",
        "import gym\n",
        "gym.logger.set_level(40)  # Block warning\n",
        "env = gym.make(env_name)\n",
        "env_func = gym.make\n",
        "env_args = get_gym_env_args(env, if_print=True)\n",
        "\n",
        "args = Arguments(agent_class, env_func, env_args)\n",
        "\n",
        "'''reward shaping'''\n",
        "args.reward_scale = 2 ** 0  # reward scaling factor; an approximate target cumulative return is usually close to 256\n",
        "args.gamma = 0.97  # discount factor of future rewards\n",
        "\n",
        "'''network update'''\n",
        "args.target_step = args.max_step * 2  # collect target_step, then update network\n",
        "args.net_dim = 2 ** 7  # the middle layer dimension of Fully Connected Network\n",
        "args.num_layer = 3  # the layer number of MultiLayer Perceptron, `assert num_layer >= 2`\n",
        "args.batch_size = 2 ** 7  # num of transitions sampled from replay buffer.\n",
        "args.repeat_times = 2 ** 0  # repeatedly update network using ReplayBuffer to keep critic's loss small\n",
        "args.explore_rate = 0.25  # epsilon-greedy for exploration.\n",
        "\n",
        "'''evaluate'''\n",
        "args.eval_gap = 2 ** 5  # seconds between two evaluations\n",
        "args.eval_times = 2 ** 3  # number of episodes used to estimate the episode return\n",
        "args.break_step = int(8e4)  # break training if 'total_step > break_step'"
      ],
      "metadata": {
        "id": "AAPdjovQrTpE",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "df46a9ae-4d8c-4836-b471-f755282a5393"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "env_args = {'env_num': 1,\n",
            "            'env_name': 'CartPole-v0',\n",
            "            'max_step': 200,\n",
            "            'state_dim': 4,\n",
            "            'action_dim': 2,\n",
            "            'if_discrete': True}\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Choose GPU id `0` with `args.learner_gpus = 0`. If it is set to `-1` or no GPU is available, the training program falls back to CPU automatically.\n",
        "\n",
        "- The cumulative return of CartPole-v0 is ∈ (0, (1, 195), 200).\n",
        "- In general, the cumulative return of a task is ∈ (min score, (score of random actions, target score), max score)."
      ],
      "metadata": {
        "id": "Rq5LPOH2B0aw"
      }
    },
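    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A hypothetical helper (not ElegantRL's code) that mirrors this device-selection rule:\n",
        "\n",
        "```python\n",
        "def parse_learner_device(learner_gpus, cuda_available):\n",
        "    # GPU id >= 0 and CUDA available -> use that GPU; otherwise fall back to CPU\n",
        "    if learner_gpus >= 0 and cuda_available:\n",
        "        return f\"cuda:{learner_gpus}\"\n",
        "    return \"cpu\"\n",
        "```"
      ]
    },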
    {
      "cell_type": "code",
      "source": [
        "args.learner_gpus = -1\n",
        "\n",
        "train_agent(args)\n",
        "evaluate_agent(args)\n",
        "print('| The cumulative returns of CartPole-v0  is ∈ (0, (1, 195), 200)')"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "n7SBwVAkA8lA",
        "outputId": "a0385b35-5886-4a96-c55c-3d19606f9bb8"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "| Arguments Keep cwd: ./CartPole-v0_DQN_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `ExpR` denotes average rewards during exploration. The agent gets this rewards with noisy action.\n",
            "| `ObjC` denotes the objective of Critic network. Or call it loss function of critic network.\n",
            "| `ObjA` denotes the objective of Actor network. It is the average Q value of the critic network.\n",
            "\n",
            "| Steps 4.08e+02  ExpR     1.00  | ObjC     0.08  ObjA     0.89\n",
            "| Steps 1.79e+04  ExpR     1.00  | ObjC     0.25  ObjA    16.98\n",
            "| Steps 3.13e+04  ExpR     1.00  | ObjC     0.32  ObjA    26.03\n",
            "| Steps 4.11e+04  ExpR     1.00  | ObjC     0.93  ObjA    29.85\n",
            "| Steps 4.98e+04  ExpR     1.00  | ObjC     1.19  ObjA    31.70\n",
            "| Steps 5.72e+04  ExpR     1.00  | ObjC     0.06  ObjA    30.98\n",
            "| Steps 6.43e+04  ExpR     1.00  | ObjC     0.21  ObjA    31.57\n",
            "| Steps 7.06e+04  ExpR     1.00  | ObjC     0.81  ObjA    31.66\n",
            "| Steps 7.59e+04  ExpR     1.00  | ObjC     0.58  ObjA    32.71\n",
            "| Steps 8.03e+04  ExpR     1.00  | ObjC     0.49  ObjA    30.81\n",
            "| UsedTime: 299 | SavedDir: ./CartPole-v0_DQN_-1\n",
            "| Arguments Keep cwd: ./CartPole-v0_DQN_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `avgR` denotes average value of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `stdR` denotes standard dev of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `avgS` denotes the average number of steps in an episode.\n",
            "\n",
            "| Steps 4.02e+02  | avgR     9.250  stdR     0.829  | avgS      9\n",
            "| Steps 4.08e+02  | avgR     9.125  stdR     0.599  | avgS      9\n",
            "| Steps 1.79e+04  | avgR    95.375  stdR     8.230  | avgS     95\n",
            "| Steps 1.85e+04  | avgR    99.625  stdR     7.777  | avgS    100\n",
            "| Steps 3.05e+04  | avgR    84.250  stdR     5.673  | avgS     84\n",
            "| Steps 3.13e+04  | avgR   133.625  stdR     4.442  | avgS    134\n",
            "| Steps 4.04e+04  | avgR   147.875  stdR     5.862  | avgS    148\n",
            "| Steps 4.11e+04  | avgR   122.625  stdR     4.794  | avgS    123\n",
            "| Steps 4.89e+04  | avgR   200.000  stdR     0.000  | avgS    200\n",
            "| Steps 4.98e+04  | avgR   194.875  stdR     6.827  | avgS    195\n",
            "| Steps 5.66e+04  | avgR   177.750  stdR    21.376  | avgS    178\n",
            "| Steps 5.72e+04  | avgR   184.750  stdR    14.403  | avgS    185\n",
            "| Steps 6.38e+04  | avgR   200.000  stdR     0.000  | avgS    200\n",
            "| Steps 6.43e+04  | avgR   152.250  stdR     7.446  | avgS    152\n",
            "| Steps 7.04e+04  | avgR   140.000  stdR     3.317  | avgS    140\n",
            "| Steps 7.06e+04  | avgR   120.500  stdR     4.664  | avgS    120\n",
            "| Steps 7.59e+04  | avgR    90.625  stdR     3.199  | avgS     91\n",
            "| Steps 7.60e+04  | avgR   190.625  stdR     9.027  | avgS    191\n",
            "| Steps 8.03e+04  | avgR   190.375  stdR    16.763  | avgS    190\n",
            "| Save learning curve in ./CartPole-v0_DQN_-1/LearningCurve_CartPole-v0_AgentDQN.jpg\n",
            "| The cumulative returns of CartPole-v0  is ∈ (0, (1, 195), 200)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Train DQN on the [**discrete action** space env `LunarLander`](https://gym.openai.com/envs/LunarLander-v2/).\n",
        "\n",
        "**You can skip running the cell below.** DQN takes over 6,000 seconds to train on this task, which is too slow. (DuelingDoubleDQN takes less than 1,000 seconds to train on the LunarLander-v2 task.)\n",
        "\n",
        "There are many other DQN variants that achieve higher cumulative returns and take less time to train. See [examples/demo_DQN_Dueling_Double_DQN.py](https://github.com/AI4Finance-Foundation/ElegantRL/blob/master/examples/demo_DQN_Dueling_Double_DQN.py)."
      ],
      "metadata": {
        "id": "qK21xTxnHGOp"
      }
    },
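    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For intuition, the Double DQN part of the variants mentioned above decouples action selection from action evaluation. A scalar sketch (plain Python, illustrative only, not ElegantRL's implementation):\n",
        "\n",
        "```python\n",
        "def double_dqn_td_target(reward, done, next_q_online, next_q_target, gamma=0.99):\n",
        "    # select the next action with the online network,\n",
        "    # but evaluate it with the target network (reduces Q overestimation)\n",
        "    next_action = max(range(len(next_q_online)), key=next_q_online.__getitem__)\n",
        "    return reward + gamma * (1.0 - done) * next_q_target[next_action]\n",
        "```"
      ]
    },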
    {
      "cell_type": "code",
      "source": [
        "from elegantrl_helloworld.agent import AgentDQN\n",
        "agent_class = AgentDQN\n",
        "env_name = \"LunarLander-v2\"\n",
        "\n",
        "import gym\n",
        "gym.logger.set_level(40)  # Block warning\n",
        "env = gym.make(env_name)\n",
        "env_func = gym.make\n",
        "env_args = get_gym_env_args(env, if_print=True)\n",
        "\n",
        "args = Arguments(agent_class, env_func, env_args)\n",
        "\n",
        "'''reward shaping'''\n",
        "args.reward_scale = 2 ** 0\n",
        "args.gamma = 0.99\n",
        "\n",
        "'''network update'''\n",
        "args.target_step = args.max_step\n",
        "args.net_dim = 2 ** 7\n",
        "args.num_layer = 3\n",
        "\n",
        "args.batch_size = 2 ** 6\n",
        "\n",
        "args.repeat_times = 2 ** 0\n",
        "args.explore_noise = 0.125\n",
        "\n",
        "'''evaluate'''\n",
        "args.eval_gap = 2 ** 7\n",
        "args.eval_times = 2 ** 4\n",
        "args.break_step = int(4e5)  # LunarLander needs a larger `break_step`\n",
        "\n",
        "args.learner_gpus = -1  # -1 denotes training on CPU\n",
        "train_agent(args)\n",
        "evaluate_agent(args)\n",
        "print('| The cumulative returns of LunarLander-v2 is ∈ (-1800, (-600, 200), 340)')"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yH91VA17Hcsn",
        "outputId": "fc4e96cb-9000-4ead-d899-ff0722218929"
      },
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "env_args = {'env_num': 1,\n",
            "            'env_name': 'LunarLander-v2',\n",
            "            'max_step': 1000,\n",
            "            'state_dim': 8,\n",
            "            'action_dim': 4,\n",
            "            'if_discrete': True}\n",
            "| Arguments Remove cwd: ./LunarLander-v2_DQN_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `ExpR` denotes average rewards during exploration. The agent gets this rewards with noisy action.\n",
            "| `ObjC` denotes the objective of Critic network. Or call it loss function of critic network.\n",
            "| `ObjA` denotes the objective of Actor network. It is the average Q value of the critic network.\n",
            "\n",
            "| Steps 1.12e+03  ExpR    -2.70  | ObjC     3.05  ObjA    -4.95\n",
            "| Steps 1.67e+04  ExpR    -0.34  | ObjC    11.37  ObjA   -49.77\n",
            "| Steps 3.32e+04  ExpR    -0.39  | ObjC    17.30  ObjA   -46.68\n",
            "| Steps 4.42e+04  ExpR    -0.11  | ObjC     1.10  ObjA   -27.90\n",
            "| Steps 5.71e+04  ExpR    -0.06  | ObjC     1.55  ObjA   -33.38\n",
            "| Steps 6.81e+04  ExpR     0.04  | ObjC     0.70  ObjA    32.92\n",
            "| Steps 7.83e+04  ExpR    -0.22  | ObjC     0.91  ObjA   -22.68\n",
            "| Steps 9.10e+04  ExpR    -0.31  | ObjC     0.83  ObjA    25.71\n",
            "| Steps 1.00e+05  ExpR    -0.11  | ObjC     6.26  ObjA    23.71\n",
            "| Steps 1.12e+05  ExpR    -0.22  | ObjC     0.62  ObjA    34.25\n",
            "| Steps 1.22e+05  ExpR    -0.34  | ObjC    14.76  ObjA    16.74\n",
            "| Steps 1.31e+05  ExpR     0.03  | ObjC     2.37  ObjA   -14.47\n",
            "| Steps 1.39e+05  ExpR     0.07  | ObjC     0.94  ObjA    27.17\n",
            "| Steps 1.50e+05  ExpR    -0.10  | ObjC     0.85  ObjA   -12.59\n",
            "| Steps 1.58e+05  ExpR    -0.62  | ObjC     1.40  ObjA    52.57\n",
            "| Steps 1.67e+05  ExpR     0.11  | ObjC     4.06  ObjA    26.94\n",
            "| Steps 1.76e+05  ExpR     0.08  | ObjC     0.94  ObjA    41.06\n",
            "| Steps 1.85e+05  ExpR     0.12  | ObjC     1.09  ObjA    27.66\n",
            "| Steps 1.94e+05  ExpR    -0.06  | ObjC     0.90  ObjA    26.59\n",
            "| Steps 2.01e+05  ExpR     0.19  | ObjC     0.73  ObjA    41.54\n",
            "| Steps 2.10e+05  ExpR    -0.01  | ObjC     0.84  ObjA    47.05\n",
            "| Steps 2.17e+05  ExpR    -0.06  | ObjC     0.46  ObjA    45.40\n",
            "| Steps 2.24e+05  ExpR     0.11  | ObjC     0.68  ObjA    49.06\n",
            "| Steps 2.32e+05  ExpR     0.18  | ObjC     0.55  ObjA    45.73\n",
            "| Steps 2.39e+05  ExpR     0.06  | ObjC    12.54  ObjA    38.81\n",
            "| Steps 2.46e+05  ExpR    -0.03  | ObjC     2.05  ObjA    37.78\n",
            "| Steps 2.52e+05  ExpR     0.07  | ObjC     0.81  ObjA    39.93\n",
            "| Steps 2.58e+05  ExpR     0.01  | ObjC     6.43  ObjA    25.01\n",
            "| Steps 2.65e+05  ExpR     0.13  | ObjC     3.54  ObjA    30.40\n",
            "| Steps 2.70e+05  ExpR    -0.02  | ObjC     1.04  ObjA    34.32\n",
            "| Steps 2.75e+05  ExpR    -0.09  | ObjC     1.18  ObjA    44.72\n",
            "| Steps 2.83e+05  ExpR    -0.11  | ObjC     0.38  ObjA    41.80\n",
            "| Steps 2.89e+05  ExpR    -0.08  | ObjC     3.91  ObjA    32.60\n",
            "| Steps 2.95e+05  ExpR    -0.01  | ObjC     0.93  ObjA    56.21\n",
            "| Steps 3.00e+05  ExpR     0.21  | ObjC     0.91  ObjA    51.45\n",
            "| Steps 3.06e+05  ExpR     0.04  | ObjC     2.16  ObjA    37.36\n",
            "| Steps 3.12e+05  ExpR     0.08  | ObjC     0.84  ObjA    28.29\n",
            "| Steps 3.18e+05  ExpR     0.13  | ObjC     0.57  ObjA    54.10\n",
            "| Steps 3.23e+05  ExpR     0.03  | ObjC     1.09  ObjA    10.06\n",
            "| Steps 3.30e+05  ExpR    -0.01  | ObjC     0.70  ObjA    51.98\n",
            "| Steps 3.36e+05  ExpR     0.05  | ObjC     2.70  ObjA    35.15\n",
            "| Steps 3.42e+05  ExpR    -0.02  | ObjC     1.05  ObjA    31.27\n",
            "| Steps 3.47e+05  ExpR     0.03  | ObjC    10.52  ObjA    33.38\n",
            "| Steps 3.52e+05  ExpR     0.14  | ObjC     0.75  ObjA    42.52\n",
            "| Steps 3.58e+05  ExpR     0.10  | ObjC     0.73  ObjA    50.54\n",
            "| Steps 3.63e+05  ExpR     0.13  | ObjC     0.74  ObjA    29.97\n",
            "| Steps 3.67e+05  ExpR    -0.06  | ObjC     1.20  ObjA    39.28\n",
            "| Steps 3.71e+05  ExpR     0.11  | ObjC     2.31  ObjA    10.67\n",
            "| Steps 3.76e+05  ExpR     0.16  | ObjC    17.53  ObjA     0.34\n",
            "| Steps 3.81e+05  ExpR     0.09  | ObjC     1.03  ObjA    33.35\n",
            "| Steps 3.86e+05  ExpR     0.14  | ObjC     2.96  ObjA    51.90\n",
            "| Steps 3.91e+05  ExpR    -0.02  | ObjC     0.71  ObjA    54.86\n",
            "| Steps 3.95e+05  ExpR    -0.41  | ObjC     1.60  ObjA    55.81\n",
            "| Steps 4.00e+05  ExpR    -0.22  | ObjC     0.64  ObjA    59.22\n",
            "| UsedTime: 7384 | SavedDir: ./LunarLander-v2_DQN_-1\n",
            "| Arguments Keep cwd: ./LunarLander-v2_DQN_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `avgR` denotes average value of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `stdR` denotes standard dev of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `avgS` denotes the average number of steps in an episode.\n",
            "\n",
            "| Steps 1.12e+03  | avgR  -530.343  stdR   183.737  | avgS    103\n",
            "| Steps 1.67e+04  | avgR  -135.901  stdR    46.043  | avgS    587\n",
            "| Steps 3.32e+04  | avgR  -129.204  stdR    23.668  | avgS    951\n",
            "| Steps 4.42e+04  | avgR  -117.652  stdR    18.386  | avgS    836\n",
            "| Steps 5.71e+04  | avgR   -91.914  stdR    52.586  | avgS    960\n",
            "| Steps 6.81e+04  | avgR   -61.360  stdR    22.485  | avgS   1000\n",
            "| Steps 7.83e+04  | avgR   -69.344  stdR    88.238  | avgS    875\n",
            "| Steps 9.10e+04  | avgR  -119.279  stdR    18.228  | avgS    356\n",
            "| Steps 1.00e+05  | avgR   -43.961  stdR    59.145  | avgS    982\n",
            "| Steps 1.12e+05  | avgR   -67.955  stdR    31.958  | avgS    215\n",
            "| Steps 1.22e+05  | avgR   -35.413  stdR    29.490  | avgS    167\n",
            "| Steps 1.31e+05  | avgR   -52.169  stdR    36.981  | avgS    954\n",
            "| Steps 1.39e+05  | avgR   -34.231  stdR    72.037  | avgS    909\n",
            "| Steps 1.50e+05  | avgR    54.501  stdR    97.816  | avgS    292\n",
            "| Steps 1.58e+05  | avgR  -122.727  stdR    71.232  | avgS    213\n",
            "| Steps 1.67e+05  | avgR    12.992  stdR   129.530  | avgS    286\n",
            "| Steps 1.76e+05  | avgR    31.913  stdR   105.382  | avgS    652\n",
            "| Steps 1.85e+05  | avgR   105.082  stdR   110.050  | avgS    548\n",
            "| Steps 1.94e+05  | avgR    14.563  stdR   135.964  | avgS    704\n",
            "| Steps 2.01e+05  | avgR    34.576  stdR   112.856  | avgS    342\n",
            "| Steps 2.10e+05  | avgR     5.256  stdR    50.242  | avgS    226\n",
            "| Steps 2.17e+05  | avgR    47.514  stdR   111.849  | avgS    263\n",
            "| Steps 2.24e+05  | avgR   -24.851  stdR    51.499  | avgS    258\n",
            "| Steps 2.32e+05  | avgR   200.027  stdR   104.010  | avgS    520\n",
            "| Steps 2.39e+05  | avgR     8.645  stdR    63.075  | avgS    977\n",
            "| Steps 2.46e+05  | avgR   -49.730  stdR    16.815  | avgS   1000\n",
            "| Steps 2.52e+05  | avgR   -46.238  stdR    15.962  | avgS   1000\n",
            "| Steps 2.58e+05  | avgR     5.002  stdR   102.488  | avgS    757\n",
            "| Steps 2.65e+05  | avgR   -50.510  stdR    23.158  | avgS   1000\n",
            "| Steps 2.70e+05  | avgR    33.518  stdR    82.924  | avgS    905\n",
            "| Steps 2.75e+05  | avgR    -2.839  stdR    73.202  | avgS    960\n",
            "| Steps 2.83e+05  | avgR   -22.532  stdR    23.164  | avgS    948\n",
            "| Steps 2.89e+05  | avgR   -13.891  stdR    65.856  | avgS    116\n",
            "| Steps 2.95e+05  | avgR    82.550  stdR   120.489  | avgS    241\n",
            "| Steps 3.00e+05  | avgR   169.858  stdR    92.044  | avgS    387\n",
            "| Steps 3.06e+05  | avgR    18.055  stdR   104.873  | avgS    652\n",
            "| Steps 3.12e+05  | avgR   131.547  stdR   116.128  | avgS    597\n",
            "| Steps 3.18e+05  | avgR    17.401  stdR   123.102  | avgS    485\n",
            "| Steps 3.23e+05  | avgR   200.386  stdR   105.895  | avgS    324\n",
            "| Steps 3.30e+05  | avgR   145.581  stdR   121.974  | avgS    597\n",
            "| Steps 3.36e+05  | avgR    11.743  stdR   116.963  | avgS    712\n",
            "| Steps 3.42e+05  | avgR     1.907  stdR    41.075  | avgS    984\n",
            "| Steps 3.47e+05  | avgR    -2.172  stdR    25.627  | avgS   1000\n",
            "| Steps 3.52e+05  | avgR    98.222  stdR   108.038  | avgS    808\n",
            "| Steps 3.58e+05  | avgR   200.371  stdR    60.832  | avgS    535\n",
            "| Steps 3.63e+05  | avgR   164.193  stdR    70.218  | avgS    720\n",
            "| Steps 3.67e+05  | avgR    80.374  stdR   108.111  | avgS    795\n",
            "| Steps 3.71e+05  | avgR   173.379  stdR   105.248  | avgS    501\n",
            "| Steps 3.76e+05  | avgR   174.892  stdR    75.100  | avgS    667\n",
            "| Steps 3.81e+05  | avgR   138.224  stdR    88.497  | avgS    488\n",
            "| Steps 3.86e+05  | avgR   179.048  stdR    83.636  | avgS    440\n",
            "| Steps 3.91e+05  | avgR   -20.371  stdR    93.536  | avgS    156\n",
            "| Steps 3.95e+05  | avgR   -11.912  stdR   105.748  | avgS    153\n",
            "| Steps 4.00e+05  | avgR     9.476  stdR    72.703  | avgS    103\n",
            "| Save learning curve in ./LunarLander-v2_DQN_-1/LearningCurve_LunarLander-v2_AgentDQN.jpg\n",
            "| The cumulative returns of LunarLander-v2 is ∈ (-1800, (-600, 200), 340)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## **Part 4: Train DDPG on a continuous action space task.**\n",
        "\n",
        "Train DDPG on the [**continuous action** space env `Pendulum`](https://gym.openai.com/envs/Pendulum-v0/).\n",
        "\n",
        "We show a custom env in [`class PendulumEnv` of helloworld/erl_env.py](https://github.com/AI4Finance-Foundation/ElegantRL/blob/master/helloworld/erl_env.py#L19-L23).\n",
        "\n",
        "The OpenAI Pendulum env sets its action space to (-2, +2), which is inconvenient. We suggest rescaling the action space to (-1, +1) when designing your own env.\n"
      ],
      "metadata": {
        "id": "z2Ik5cDoyPGU"
      }
    },
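    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal, hypothetical rescaling helper (plain Python, not the `PendulumEnv` implementation) that maps a (-1, +1) action to an env's native range:\n",
        "\n",
        "```python\n",
        "def rescale_action(action, low=-2.0, high=2.0):\n",
        "    # linearly map an action from (-1, +1) to (low, high)\n",
        "    return low + (action + 1.0) * 0.5 * (high - low)\n",
        "```"
      ]
    },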
    {
      "cell_type": "code",
      "source": [
        "from elegantrl_helloworld.config import Arguments\n",
        "from elegantrl_helloworld.run import train_agent, evaluate_agent\n",
        "from elegantrl_helloworld.env import get_gym_env_args\n",
        "from elegantrl_helloworld.agent import AgentDDPG\n",
        "agent_class = AgentDDPG\n",
        "\n",
        "from elegantrl_helloworld.env import PendulumEnv\n",
        "env = PendulumEnv('Pendulum-v0')  # PendulumEnv('Pendulum-v1')\n",
        "env_func = PendulumEnv\n",
        "env_args = get_gym_env_args(env, if_print=True)\n",
        "\n",
        "args = Arguments(agent_class, env_func, env_args)\n",
        "\n",
        "'''reward shaping'''\n",
        "args.reward_scale = 2 ** -1  # RewardRange: -1800 < -200 < -50 < 0\n",
        "args.gamma = 0.97\n",
        "\n",
        "'''network update'''\n",
        "args.target_step = args.max_step * 2\n",
        "args.net_dim = 2 ** 7\n",
        "args.batch_size = 2 ** 7\n",
        "args.repeat_times = 2 ** 0\n",
        "args.explore_noise = 0.1\n",
        "\n",
        "'''evaluate'''\n",
        "args.eval_gap = 2 ** 6\n",
        "args.eval_times = 2 ** 3\n",
        "args.break_step = int(1e5)\n",
        "\n",
        "args.learner_gpus = -1  # -1 denotes training on CPU\n",
        "train_agent(args)\n",
        "evaluate_agent(args)\n",
        "print('| The cumulative returns of Pendulum-v1 is ∈ (-1600, (-1400, -200), 0)')"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wwkZXiHtyV6f",
        "outputId": "0e7c8c26-9b4b-42e3-de2a-9da7f76bf670"
      },
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "WARNING: env.action_space.high [2.]\n",
            "WARNING: env.action_space.low [-2.]\n",
            "env_args = {'env_num': 1,\n",
            "            'env_name': 'Pendulum-v0',\n",
            "            'max_step': 200,\n",
            "            'state_dim': 3,\n",
            "            'action_dim': 1,\n",
            "            'if_discrete': False}\n",
            "| Arguments Remove cwd: ./Pendulum-v0_DDPG_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `ExpR` denotes average rewards during exploration. The agent gets this rewards with noisy action.\n",
            "| `ObjC` denotes the objective of Critic network. Or call it loss function of critic network.\n",
            "| `ObjA` denotes the objective of Actor network. It is the average Q value of the critic network.\n",
            "\n",
            "| Steps 4.00e+02  ExpR    -3.71  | ObjC     1.13  ObjA    -0.83\n",
            "| Steps 1.12e+04  ExpR    -0.32  | ObjC     0.32  ObjA   -39.47\n",
            "| Steps 1.96e+04  ExpR    -0.65  | ObjC     0.57  ObjA   -40.92\n",
            "| Steps 2.68e+04  ExpR    -0.16  | ObjC     0.32  ObjA   -27.83\n",
            "| Steps 3.32e+04  ExpR    -0.31  | ObjC     0.14  ObjA   -25.19\n",
            "| Steps 3.88e+04  ExpR    -0.32  | ObjC     1.16  ObjA   -24.87\n",
            "| Steps 4.40e+04  ExpR    -0.31  | ObjC     0.09  ObjA   -27.35\n",
            "| Steps 4.88e+04  ExpR    -0.46  | ObjC     0.11  ObjA   -16.21\n",
            "| Steps 5.36e+04  ExpR    -0.16  | ObjC     0.24  ObjA   -22.40\n",
            "| Steps 5.80e+04  ExpR    -0.32  | ObjC     0.10  ObjA   -18.49\n",
            "| Steps 6.20e+04  ExpR    -0.57  | ObjC     0.66  ObjA   -22.49\n",
            "| Steps 6.60e+04  ExpR    -0.31  | ObjC     0.13  ObjA   -20.69\n",
            "| Steps 7.00e+04  ExpR    -0.31  | ObjC     0.11  ObjA   -20.38\n",
            "| Steps 7.36e+04  ExpR    -0.30  | ObjC     0.12  ObjA   -15.18\n",
            "| Steps 7.72e+04  ExpR    -0.45  | ObjC     0.12  ObjA   -16.68\n",
            "| Steps 8.08e+04  ExpR    -0.45  | ObjC     0.08  ObjA   -19.84\n",
            "| Steps 8.40e+04  ExpR    -0.47  | ObjC     0.10  ObjA   -17.64\n",
            "| Steps 8.72e+04  ExpR    -0.31  | ObjC     0.07  ObjA   -19.66\n",
            "| Steps 9.04e+04  ExpR    -0.45  | ObjC     0.12  ObjA   -16.49\n",
            "| Steps 9.36e+04  ExpR    -0.31  | ObjC     0.04  ObjA   -14.00\n",
            "| Steps 9.64e+04  ExpR    -0.45  | ObjC     0.08  ObjA   -19.62\n",
            "| Steps 9.92e+04  ExpR    -0.62  | ObjC     0.10  ObjA   -15.54\n",
            "| UsedTime: 1453 | SavedDir: ./Pendulum-v0_DDPG_-1\n",
            "| Arguments Keep cwd: ./Pendulum-v0_DDPG_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `avgR` denotes average value of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `stdR` denotes standard dev of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `avgS` denotes the average number of steps in an episode.\n",
            "\n",
            "| Steps 4.00e+02  | avgR -1348.050  stdR   271.455  | avgS    200\n",
            "| Steps 1.12e+04  | avgR  -217.716  stdR   128.023  | avgS    200\n",
            "| Steps 1.96e+04  | avgR  -183.775  stdR   152.724  | avgS    200\n",
            "| Steps 2.68e+04  | avgR  -153.652  stdR   114.133  | avgS    200\n",
            "| Steps 3.32e+04  | avgR  -135.305  stdR   105.206  | avgS    200\n",
            "| Steps 3.88e+04  | avgR  -115.037  stdR    39.912  | avgS    200\n",
            "| Steps 4.40e+04  | avgR  -168.948  stdR    58.590  | avgS    200\n",
            "| Steps 4.88e+04  | avgR  -139.247  stdR    72.530  | avgS    200\n",
            "| Steps 5.36e+04  | avgR  -163.751  stdR    56.435  | avgS    200\n",
            "| Steps 5.80e+04  | avgR  -138.569  stdR    39.777  | avgS    200\n",
            "| Steps 6.20e+04  | avgR  -109.240  stdR    39.215  | avgS    200\n",
            "| Steps 6.60e+04  | avgR  -122.601  stdR    82.795  | avgS    200\n",
            "| Steps 7.00e+04  | avgR  -108.108  stdR    73.511  | avgS    200\n",
            "| Steps 7.36e+04  | avgR  -166.443  stdR    56.351  | avgS    200\n",
            "| Steps 7.72e+04  | avgR  -109.545  stdR    38.913  | avgS    200\n",
            "| Steps 8.08e+04  | avgR  -184.884  stdR    60.408  | avgS    200\n",
            "| Steps 8.40e+04  | avgR  -166.229  stdR   113.710  | avgS    200\n",
            "| Steps 8.72e+04  | avgR  -194.961  stdR   100.822  | avgS    200\n",
            "| Steps 9.04e+04  | avgR  -191.578  stdR    56.208  | avgS    200\n",
            "| Steps 9.36e+04  | avgR  -136.921  stdR    69.107  | avgS    200\n",
            "| Steps 9.64e+04  | avgR  -175.967  stdR    77.816  | avgS    200\n",
            "| Steps 9.92e+04  | avgR  -150.688  stdR    51.610  | avgS    200\n",
            "| Save learning curve in ./Pendulum-v0_DDPG_-1/LearningCurve_Pendulum-v0_AgentDDPG.jpg\n",
            "| The cumulative returns of Pendulum-v1 is ∈ (-1600, (-1400, -200), 0)\n"
          ]
        }
      ]
    },
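    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A note on `args.reward_scale = 2 ** -1` used above: assuming it multiplies each step reward before training (our reading of this hyperparameter), it shrinks the critic's target values into a numerically friendly range without changing which policy is optimal. A quick sketch with made-up step rewards:\n",
        "\n",
        "```python\n",
        "reward_scale = 2 ** -1  # 0.5\n",
        "raw_step_rewards = [-16.0, -4.0, -1.0]  # hypothetical Pendulum step rewards\n",
        "scaled = [r * reward_scale for r in raw_step_rewards]\n",
        "print(scaled)  # [-8.0, -2.0, -0.5]\n",
        "```\n"
      ]
    },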
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3n8zcgcn14uq"
      },
      "source": [
        "# **Part 5: Train PPO on a continuous action space task.**\n",
        "\n",
        "Train PPO on [**Continuous action** space env `Pendulum`](https://gym.openai.com/envs/Pendulum-v0/).\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "E03f6cTeajK4",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "0e62173c-b6af-4b36-9073-875c2f72fd73"
      },
      "source": [
        "from elegantrl_helloworld.config import Arguments\n",
        "from elegantrl_helloworld.run import train_agent, evaluate_agent\n",
        "from elegantrl_helloworld.env import get_gym_env_args\n",
        "from elegantrl_helloworld.agent import AgentPPO\n",
        "agent_class = AgentPPO\n",
        "\n",
        "from elegantrl_helloworld.env import PendulumEnv\n",
        "env = PendulumEnv()\n",
        "env_func = PendulumEnv\n",
        "env_args = get_gym_env_args(env, if_print=True)\n",
        "\n",
        "args = Arguments(agent_class, env_func, env_args)\n",
        "\n",
        "'''reward shaping'''\n",
        "args.reward_scale = 2 ** -1  # RewardRange: -1800 < -200 < -50 < 0\n",
        "args.gamma = 0.97\n",
        "\n",
        "'''network update'''\n",
        "args.target_step = args.max_step * 8\n",
        "args.net_dim = 2 ** 7\n",
        "args.num_layer = 2\n",
        "args.batch_size = 2 ** 8\n",
        "args.repeat_times = 2 ** 5\n",
        "\n",
        "'''evaluate'''\n",
        "args.eval_gap = 2 ** 6\n",
        "args.eval_times = 2 ** 3\n",
        "args.break_step = int(8e5)\n",
        "\n",
        "args.learner_gpus = -1  # -1 denotes using the CPU\n",
        "train_agent(args)\n",
        "evaluate_agent(args)\n",
        "print('| The cumulative returns of Pendulum-v1 is ∈ (-1600, (-1400, -200), 0)')"
      ],
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "WARNING: env.action_space.high [2.]\n",
            "WARNING: env.action_space.low [-2.]\n",
            "env_args = {'env_num': 1,\n",
            "            'env_name': 'Pendulum-v0',\n",
            "            'max_step': 200,\n",
            "            'state_dim': 3,\n",
            "            'action_dim': 1,\n",
            "            'if_discrete': False}\n",
            "| Arguments Remove cwd: ./Pendulum-v0_PPO_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `ExpR` denotes average rewards during exploration. The agent gets this rewards with noisy action.\n",
            "| `ObjC` denotes the objective of Critic network. Or call it loss function of critic network.\n",
            "| `ObjA` denotes the objective of Actor network. It is the average Q value of the critic network.\n",
            "\n",
            "| Steps 1.60e+03  ExpR    -3.34  | ObjC    93.37  ObjA     0.02\n",
            "| Steps 9.76e+04  ExpR    -2.69  | ObjC    26.47  ObjA     0.13\n",
            "| Steps 1.95e+05  ExpR    -2.37  | ObjC    14.92  ObjA     0.12\n",
            "| Steps 2.94e+05  ExpR    -1.95  | ObjC    10.23  ObjA     0.03\n",
            "| Steps 3.94e+05  ExpR    -1.75  | ObjC     7.16  ObjA    -0.01\n",
            "| Steps 4.93e+05  ExpR    -0.87  | ObjC     6.48  ObjA     0.03\n",
            "| Steps 5.92e+05  ExpR    -0.67  | ObjC     5.73  ObjA    -0.04\n",
            "| Steps 6.93e+05  ExpR    -0.54  | ObjC     1.24  ObjA    -0.05\n",
            "| Steps 7.92e+05  ExpR    -0.57  | ObjC     1.65  ObjA    -0.21\n",
            "| UsedTime: 524 | SavedDir: ./Pendulum-v0_PPO_-1\n",
            "| Arguments Keep cwd: ./Pendulum-v0_PPO_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `avgR` denotes average value of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `stdR` denotes standard dev of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `avgS` denotes the average number of steps in an episode.\n",
            "\n",
            "| Steps 1.60e+03  | avgR -1397.987  stdR   200.431  | avgS    200\n",
            "| Steps 9.76e+04  | avgR -1037.976  stdR   111.783  | avgS    200\n",
            "| Steps 1.95e+05  | avgR  -875.206  stdR   103.358  | avgS    200\n",
            "| Steps 2.94e+05  | avgR  -691.654  stdR    86.157  | avgS    200\n",
            "| Steps 3.94e+05  | avgR  -564.713  stdR   102.412  | avgS    200\n",
            "| Steps 4.93e+05  | avgR  -318.184  stdR   141.380  | avgS    200\n",
            "| Steps 5.92e+05  | avgR  -177.265  stdR   108.117  | avgS    200\n",
            "| Steps 6.93e+05  | avgR  -216.057  stdR   209.142  | avgS    200\n",
            "| Steps 7.92e+05  | avgR  -230.227  stdR    94.815  | avgS    200\n",
            "| Save learning curve in ./Pendulum-v0_PPO_-1/LearningCurve_Pendulum-v0_AgentPPO.jpg\n",
            "| The cumulative returns of Pendulum-v1 is ∈ (-1600, (-1400, -200), 0)\n"
          ]
        }
      ]
    },
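    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Compared with the DDPG cell above, on-policy PPO collects a larger rollout (`target_step = max_step * 8`) and reuses it for many gradient steps (`repeat_times = 2 ** 5`), while off-policy DDPG used `repeat_times = 2 ** 0`. Assuming the number of gradient updates per rollout follows the usual `buffer_len / batch_size * repeat_times` pattern (an assumption about the library internals), the numbers work out to:\n",
        "\n",
        "```python\n",
        "max_step = 200              # Pendulum episode length\n",
        "target_step = max_step * 8  # samples collected per rollout\n",
        "batch_size = 2 ** 8\n",
        "repeat_times = 2 ** 5\n",
        "updates = int(target_step / batch_size * repeat_times)\n",
        "print(updates)  # 200 gradient steps per 1600-sample rollout\n",
        "```\n"
      ]
    },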
    {
      "cell_type": "markdown",
      "source": [
        "Train PPO on [**Continuous action** space env `LunarLanderContinuous`](https://gym.openai.com/envs/LunarLanderContinuous-v2/)"
      ],
      "metadata": {
        "id": "rcFcUkwfzHLE"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from elegantrl_helloworld.config import Arguments\n",
        "from elegantrl_helloworld.run import train_agent, evaluate_agent\n",
        "from elegantrl_helloworld.env import get_gym_env_args\n",
        "from elegantrl_helloworld.agent import AgentPPO\n",
        "agent_class = AgentPPO\n",
        "env_name = \"LunarLanderContinuous-v2\"\n",
        "\n",
        "import gym\n",
        "env = gym.make(env_name)\n",
        "env_func = gym.make\n",
        "env_args = get_gym_env_args(env, if_print=True)\n",
        "\n",
        "args = Arguments(agent_class, env_func, env_args)\n",
        "\n",
        "'''reward shaping'''\n",
        "args.gamma = 0.99\n",
        "args.reward_scale = 2 ** -1\n",
        "\n",
        "'''network update'''\n",
        "args.target_step = args.max_step * 8\n",
        "args.num_layer = 3\n",
        "args.batch_size = 2 ** 7\n",
        "args.repeat_times = 2 ** 4\n",
        "args.lambda_entropy = 0.04\n",
        "\n",
        "'''evaluate'''\n",
        "args.eval_gap = 2 ** 6\n",
        "args.eval_times = 2 ** 5\n",
        "args.break_step = int(4e5)\n",
        "\n",
        "args.learner_gpus = -1\n",
        "train_agent(args)\n",
        "evaluate_agent(args)\n",
        "print('| The cumulative returns of LunarLanderContinuous-v2 is ∈ (-1800, (-300, 200), 310+)')"
      ],
      "metadata": {
        "id": "9WCAcmIfzGyE",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "03513c3c-e5b8-4f4a-b4f2-00ef753d7c95"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "env_args = {'env_num': 1,\n",
            "            'env_name': 'LunarLanderContinuous-v2',\n",
            "            'max_step': 1000,\n",
            "            'state_dim': 8,\n",
            "            'action_dim': 2,\n",
            "            'if_discrete': False}\n",
            "| Arguments Remove cwd: ./LunarLanderContinuous-v2_PPO_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `ExpR` denotes average rewards during exploration. The agent gets this rewards with noisy action.\n",
            "| `ObjC` denotes the objective of Critic network. Or call it loss function of critic network.\n",
            "| `ObjA` denotes the objective of Actor network. It is the average Q value of the critic network.\n",
            "\n",
            "| Steps 8.10e+03  ExpR    -1.06  | ObjC    26.53  ObjA     0.02\n",
            "| Steps 5.63e+04  ExpR    -0.21  | ObjC     9.94  ObjA     0.01\n",
            "| Steps 1.05e+05  ExpR    -0.05  | ObjC    10.62  ObjA    -0.02\n",
            "| Steps 1.30e+05  ExpR    -0.01  | ObjC     8.52  ObjA     0.02\n",
            "| Steps 1.54e+05  ExpR     0.02  | ObjC     6.58  ObjA    -0.10\n",
            "| Steps 1.79e+05  ExpR     0.03  | ObjC     5.60  ObjA     0.11\n",
            "| Steps 2.04e+05  ExpR     0.02  | ObjC     6.21  ObjA     0.01\n",
            "| Steps 2.37e+05  ExpR     0.06  | ObjC     2.69  ObjA    -0.07\n",
            "| Steps 2.70e+05  ExpR     0.05  | ObjC     8.49  ObjA    -0.06\n",
            "| Steps 3.05e+05  ExpR     0.06  | ObjC     4.05  ObjA    -0.04\n",
            "| Steps 3.39e+05  ExpR     0.05  | ObjC     5.11  ObjA    -0.16\n",
            "| Steps 3.72e+05  ExpR     0.08  | ObjC     2.91  ObjA     0.01\n",
            "| Steps 4.05e+05  ExpR     0.07  | ObjC     4.80  ObjA    -0.10\n",
            "| UsedTime: 911 | SavedDir: ./LunarLanderContinuous-v2_PPO_-1\n",
            "| Arguments Keep cwd: ./LunarLanderContinuous-v2_PPO_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `avgR` denotes average value of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `stdR` denotes standard dev of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `avgS` denotes the average number of steps in an episode.\n",
            "\n",
            "| Steps 8.10e+03  | avgR  -177.288  stdR   124.057  | avgS    151\n",
            "| Steps 5.63e+04  | avgR  -170.839  stdR   151.327  | avgS    146\n",
            "| Steps 1.05e+05  | avgR  -209.592  stdR    46.286  | avgS    152\n",
            "| Steps 1.30e+05  | avgR  -165.752  stdR   117.630  | avgS    182\n",
            "| Steps 1.54e+05  | avgR  -138.190  stdR    93.117  | avgS    184\n",
            "| Steps 1.79e+05  | avgR   -54.291  stdR   169.554  | avgS    236\n",
            "| Steps 2.04e+05  | avgR  -100.748  stdR   113.405  | avgS    225\n",
            "| Steps 2.37e+05  | avgR    -7.361  stdR   176.139  | avgS    288\n",
            "| Steps 2.70e+05  | avgR    10.899  stdR   189.107  | avgS    367\n",
            "| Steps 3.05e+05  | avgR    57.944  stdR   188.546  | avgS    342\n",
            "| Steps 3.39e+05  | avgR   118.386  stdR   157.707  | avgS    350\n",
            "| Steps 3.72e+05  | avgR   102.001  stdR   231.197  | avgS    338\n",
            "| Steps 4.05e+05  | avgR   198.335  stdR    90.755  | avgS    278\n",
            "| Save learning curve in ./LunarLanderContinuous-v2_PPO_-1/LearningCurve_LunarLanderContinuous-v2_AgentPPO.jpg\n",
            "| The cumulative returns of LunarLanderContinuous-v2 is ∈ (-1800, (-300, 200), 310+)\n"
          ]
        }
      ]
    },
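    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cell above introduces `args.lambda_entropy = 0.04`, which (in the usual PPO formulation) weights an entropy bonus in the actor objective so that higher policy entropy is rewarded and exploration is sustained. For a 1-D Gaussian policy the differential entropy depends only on the standard deviation; a small hedged sketch of the bonus magnitude:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def gaussian_entropy(sigma):\n",
        "    # differential entropy of a 1-D Gaussian: 0.5 * log(2 * pi * e * sigma^2)\n",
        "    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)\n",
        "\n",
        "lambda_entropy = 0.04  # as in the cell above\n",
        "bonus = lambda_entropy * gaussian_entropy(1.0)  # sign and exact form depend on the implementation\n",
        "```\n"
      ]
    },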
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z1j5kLHF2dhJ"
      },
      "source": [
        "Train PPO on [**Continuous action** space env `BipedalWalker`](https://gym.openai.com/envs/BipedalWalker-v2/)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "KGOPSD6da23k",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "8b126d76-ea1d-40bb-f00c-05b59c1f9669"
      },
      "source": [
        "from elegantrl_helloworld.config import Arguments\n",
        "from elegantrl_helloworld.run import train_agent, evaluate_agent\n",
        "from elegantrl_helloworld.env import get_gym_env_args\n",
        "from elegantrl_helloworld.agent import AgentPPO\n",
        "agent_class = AgentPPO\n",
        "env_name = \"BipedalWalker-v3\"\n",
        "\n",
        "import gym\n",
        "env = gym.make(env_name)\n",
        "env_func = gym.make\n",
        "env_args = get_gym_env_args(env, if_print=True)\n",
        "\n",
        "args = Arguments(agent_class, env_func, env_args)\n",
        "\n",
        "'''reward shaping'''\n",
        "args.reward_scale = 2 ** -1\n",
        "args.gamma = 0.98\n",
        "\n",
        "'''network update'''\n",
        "args.target_step = args.max_step\n",
        "args.net_dim = 2 ** 8\n",
        "args.num_layer = 3\n",
        "args.batch_size = 2 ** 8\n",
        "args.repeat_times = 2 ** 4\n",
        "\n",
        "'''evaluate'''\n",
        "args.eval_gap = 2 ** 6\n",
        "args.eval_times = 2 ** 4\n",
        "args.break_step = int(1e6)\n",
        "\n",
        "args.learner_gpus = -1  # -1 denotes using the CPU\n",
        "train_agent(args)\n",
        "evaluate_agent(args)\n",
        "print('| The cumulative returns of BipedalWalker-v3 is ∈ (-150, (-100, 280), 320+)')\n"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "env_args = {'env_num': 1,\n",
            "            'env_name': 'BipedalWalker-v3',\n",
            "            'max_step': 1600,\n",
            "            'state_dim': 24,\n",
            "            'action_dim': 4,\n",
            "            'if_discrete': False}\n",
            "| Arguments Remove cwd: ./BipedalWalker-v3_PPO_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `ExpR` denotes average rewards during exploration. The agent gets this rewards with noisy action.\n",
            "| `ObjC` denotes the objective of Critic network. Or call it loss function of critic network.\n",
            "| `ObjA` denotes the objective of Actor network. It is the average Q value of the critic network.\n",
            "\n",
            "| Steps 1.60e+03  ExpR    -0.02  | ObjC     0.10  ObjA     0.05\n",
            "| Steps 3.78e+04  ExpR    -0.05  | ObjC     1.02  ObjA     0.01\n",
            "| Steps 7.37e+04  ExpR    -0.05  | ObjC     0.80  ObjA     0.04\n",
            "| Steps 1.10e+05  ExpR    -0.02  | ObjC     0.03  ObjA    -0.01\n",
            "| Steps 1.45e+05  ExpR    -0.00  | ObjC     0.05  ObjA     0.01\n",
            "| Steps 1.82e+05  ExpR     0.01  | ObjC     0.05  ObjA     0.09\n",
            "| Steps 2.17e+05  ExpR     0.02  | ObjC     0.08  ObjA     0.05\n",
            "| Steps 2.52e+05  ExpR     0.03  | ObjC     0.06  ObjA     0.06\n",
            "| Steps 2.89e+05  ExpR     0.03  | ObjC     0.05  ObjA     0.08\n",
            "| Steps 3.24e+05  ExpR     0.03  | ObjC     0.08  ObjA     0.00\n",
            "| Steps 3.60e+05  ExpR     0.01  | ObjC     0.22  ObjA     0.02\n",
            "| Steps 3.96e+05  ExpR    -0.02  | ObjC     0.69  ObjA     0.06\n",
            "| Steps 4.32e+05  ExpR     0.04  | ObjC     0.05  ObjA     0.10\n",
            "| Steps 4.69e+05  ExpR    -0.02  | ObjC     1.39  ObjA     0.04\n",
            "| Steps 5.06e+05  ExpR    -0.04  | ObjC     1.72  ObjA     0.09\n",
            "| Steps 5.43e+05  ExpR     0.05  | ObjC     0.06  ObjA     0.10\n",
            "| Steps 5.79e+05  ExpR     0.05  | ObjC     0.09  ObjA    -0.02\n",
            "| Steps 6.15e+05  ExpR    -0.03  | ObjC     1.36  ObjA     0.10\n",
            "| Steps 6.51e+05  ExpR     0.05  | ObjC     0.08  ObjA     0.03\n",
            "| Steps 6.87e+05  ExpR     0.05  | ObjC     0.04  ObjA     0.07\n",
            "| Steps 7.24e+05  ExpR     0.05  | ObjC     0.04  ObjA    -0.01\n",
            "| Steps 7.62e+05  ExpR    -0.01  | ObjC     1.10  ObjA     0.04\n",
            "| Steps 7.99e+05  ExpR     0.03  | ObjC     0.65  ObjA     0.04\n",
            "| Steps 8.34e+05  ExpR     0.06  | ObjC     0.08  ObjA     0.08\n",
            "| Steps 8.70e+05  ExpR     0.07  | ObjC     0.16  ObjA    -0.02\n",
            "| Steps 9.06e+05  ExpR     0.07  | ObjC     0.08  ObjA     0.17\n",
            "| Steps 9.42e+05  ExpR     0.07  | ObjC     0.11  ObjA     0.11\n",
            "| Steps 9.78e+05  ExpR     0.07  | ObjC     0.17  ObjA     0.08\n",
            "| UsedTime: 1820 | SavedDir: ./BipedalWalker-v3_PPO_-1\n",
            "| Arguments Keep cwd: ./BipedalWalker-v3_PPO_-1\n",
            "\n",
            "| `Steps` denotes the number of samples, or the total training step, or the running times of `env.step()`.\n",
            "| `avgR` denotes average value of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `stdR` denotes standard dev of cumulative rewards, which is the sum of rewards in an episode.\n",
            "| `avgS` denotes the average number of steps in an episode.\n",
            "\n",
            "| Steps 1.60e+03  | avgR   -93.814  stdR     1.263  | avgS     88\n",
            "| Steps 3.78e+04  | avgR   -19.896  stdR     0.677  | avgS   1600\n",
            "| Steps 7.37e+04  | avgR   -37.093  stdR     4.867  | avgS   1600\n",
            "| Steps 1.10e+05  | avgR   -51.127  stdR     3.032  | avgS   1600\n",
            "| Steps 1.45e+05  | avgR   -48.666  stdR    29.034  | avgS   1365\n",
            "| Steps 1.82e+05  | avgR   -14.884  stdR    67.086  | avgS   1199\n",
            "| Steps 2.17e+05  | avgR    -3.247  stdR   111.651  | avgS    827\n",
            "| Steps 2.52e+05  | avgR   -98.306  stdR     1.053  | avgS    102\n",
            "| Steps 2.89e+05  | avgR    10.513  stdR   123.090  | avgS    788\n",
            "| Steps 3.24e+05  | avgR   172.433  stdR     4.698  | avgS   1600\n",
            "| Steps 3.60e+05  | avgR    43.901  stdR   119.611  | avgS   1145\n",
            "| Steps 3.96e+05  | avgR  -100.273  stdR     1.573  | avgS     90\n",
            "| Steps 4.32e+05  | avgR  -102.632  stdR     0.809  | avgS     83\n",
            "| Steps 4.69e+05  | avgR   -19.897  stdR    84.923  | avgS    686\n",
            "| Steps 5.06e+05  | avgR    -0.357  stdR    90.928  | avgS    796\n",
            "| Steps 5.43e+05  | avgR   122.198  stdR   106.949  | avgS   1328\n",
            "| Steps 5.79e+05  | avgR   156.253  stdR    54.301  | avgS   1546\n",
            "| Steps 6.15e+05  | avgR   112.193  stdR   133.756  | avgS   1195\n",
            "| Steps 6.51e+05  | avgR   167.152  stdR    94.265  | avgS   1433\n",
            "| Steps 6.87e+05  | avgR   188.158  stdR    48.614  | avgS   1556\n",
            "| Steps 7.24e+05  | avgR   145.986  stdR   125.801  | avgS   1287\n",
            "| Steps 7.62e+05  | avgR   135.115  stdR    95.265  | avgS   1337\n",
            "| Steps 7.99e+05  | avgR   228.162  stdR     6.880  | avgS   1600\n",
            "| Steps 8.34e+05  | avgR   233.372  stdR     4.996  | avgS   1600\n",
            "| Steps 8.70e+05  | avgR   266.571  stdR     3.626  | avgS   1576\n",
            "| Steps 9.06e+05  | avgR   229.514  stdR    91.325  | avgS   1431\n",
            "| Steps 9.42e+05  | avgR    57.267  stdR   168.962  | avgS    748\n",
            "| Steps 9.78e+05  | avgR   206.148  stdR   119.650  | avgS   1338\n",
            "| Save learning curve in ./BipedalWalker-v3_PPO_-1/LearningCurve_BipedalWalker-v3_AgentPPO.jpg\n",
            "| The cumulative returns of BipedalWalker-v3 is ∈ (-150, (-100, 280), 320+)\n"
          ]
        }
      ]
    }
  ]
}