{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1-数学基础"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 随机抽样：\n",
    "箱子里有10 个球。抽到R，G,B概率分别是0.2，0.5，0.3，每次都把球放回去，抽样10次结果是什么？"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from numpy.random import choice\n",
    "samples = choice(['R', 'G', 'B'],\n",
    "size=10,\n",
    "p=[0.2, 0.5, 0.3])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array(['R', 'B', 'G', 'G', 'B', 'G', 'G', 'G', 'R', 'G'], dtype='<U1')"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "samples"
   ]
  },
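  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an added illustration, not from the original text): with many more draws, the empirical frequencies should approach the probabilities `p = [0.2, 0.5, 0.3]`.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from numpy.random import choice\n",
    "\n",
    "# 100,000 draws with replacement; the law of large numbers applies\n",
    "big = choice(['R', 'G', 'B'], size=100_000, p=[0.2, 0.5, 0.3])\n",
    "values, counts = np.unique(big, return_counts=True)\n",
    "freqs = dict(zip(values, counts / len(big)))\n",
    "freqs  # each frequency lands near its probability\n",
    "```\n"
   ]
  },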
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 离散值期望\n",
    "$E{_X\\sim p\\left(\\cdot\\right)^.}\\left[h\\left(X\\right)\\right]=\\sum_{x\\in\\mathcal{X}} p\\left(x\\right)\\cdot h\\left(x\\right)$\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.5"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#计算抛硬币的数学期望\n",
    "from numpy.random import choice\n",
    "value= [0, 1]\n",
    "p=[0.5, 0.5]\n",
    "#计算期望E\n",
    "E = sum(value[i]*p[i] for i in range(len(value)))\n",
    "E"
   ]
  },
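  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same weighted sum works for any finite distribution. As an added illustration, the expectation of a fair six-sided die, written as a dot product:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "values = np.arange(1, 7)      # faces 1..6\n",
    "p = np.full(6, 1 / 6)         # uniform probabilities\n",
    "E = float(np.dot(values, p))  # sum of value * probability\n",
    "E  # ≈ 3.5\n",
    "```\n"
   ]
  },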
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 连续值求期望\n",
    "\n",
    "假设我们有一个均匀分布的随机变量 X，它在区间 [a, b] 上取值，其中a和b是区间的两个端点。这个均匀分布的概率密度函数（PDF）是：\n",
    "\n",
    "$f(x \\mid a, b)= \\begin{cases}\\frac{1}{b-a} & \\text { for } a \\leq x \\leq b \\\\ 0 & \\text { otherwise }\\end{cases}$\n",
    "\n",
    "$E(X)=\\int_{-\\infty}^{\\infty} x f(x) d x$ \n",
    "\n",
    "带入概率密度函数：\n",
    "$E(X)=\\int_a^b x \\frac{1}{b-a} d x$\n",
    "\n",
    "计算得出，均匀分布 $U(a, b)$ 的期望值 $E(X)$ 为:\n",
    "\n",
    "$$\n",
    "E(X)=\\frac{a^2}{2(a-b)}-\\frac{b^2}{2(a-b)}\n",
    "$$\n",
    "\n",
    "\n",
    "简化这个表达式，我们得到:\n",
    "\n",
    "$$\n",
    "E(X)=\\frac{a+b}{2}\n",
    "$$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5.0"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#计算从线段[a,b]中任选一个点,的数学期望\n",
    "from numpy.random import uniform\n",
    "a = 0\n",
    "b = 10\n",
    "#计算期望E\n",
    "E= (a+b)/2\n",
    "E\n"
   ]
  },
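  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The closed form can also be checked by Monte Carlo (an added sketch): the mean of many uniform draws from $[0, 10]$ should approach $5$.\n",
    "\n",
    "```python\n",
    "from numpy.random import default_rng\n",
    "\n",
    "rng = default_rng(0)           # fixed seed for reproducibility\n",
    "draws = rng.uniform(0, 10, size=100_000)\n",
    "mc_mean = float(draws.mean())  # close to (0 + 10) / 2 = 5\n",
    "mc_mean\n",
    "```\n"
   ]
  },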
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- **状态空间**: $S$  \n",
    "  表示所有可能的状态集合。\n",
    "  \n",
    "- **动作空间**: $A$  \n",
    "  表示所有可能的动作集合。\n",
    "\n",
    "- **状态转移函数**: $P(s' | s, a)$  \n",
    "  表示在状态 $s$ 采取动作 $a$ 后，转移到下一状态 $s'$ 的概率。\n",
    "\n",
    "- **奖励函数**: $R(s, a)$ 或 $R(s, a, s')$  \n",
    "  表示在状态 $s$ 采取动作 $a$ 并转移到下一状态 $s'$ 后得到的即时奖励。\n",
    "\n",
    "- **折扣因子**: $\\gamma$  \n",
    "  用来折扣未来奖励的权重，通常 $0 \\leq \\gamma \\leq 1$。\n"
   ]
  },
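  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (with made-up states, actions, and numbers) of how these elements can be laid out as plain Python data:\n",
    "\n",
    "```python\n",
    "# Hypothetical two-state MDP\n",
    "S = ['s0', 's1']    # state space\n",
    "A = ['stay', 'go']  # action space\n",
    "# P[(s, a)] maps each next state to its probability\n",
    "P = {\n",
    "    ('s0', 'stay'): {'s0': 0.9, 's1': 0.1},\n",
    "    ('s0', 'go'):   {'s0': 0.2, 's1': 0.8},\n",
    "    ('s1', 'stay'): {'s1': 1.0},\n",
    "    ('s1', 'go'):   {'s0': 0.5, 's1': 0.5},\n",
    "}\n",
    "R = {('s0', 'stay'): 0.0, ('s0', 'go'): 1.0,\n",
    "     ('s1', 'stay'): 2.0, ('s1', 'go'): 0.0}\n",
    "gamma = 0.9         # discount factor\n",
    "\n",
    "# Each transition distribution must sum to 1\n",
    "ok = all(abs(sum(d.values()) - 1.0) < 1e-9 for d in P.values())\n",
    "ok  # True\n",
    "```\n"
   ]
  },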
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2-环境\n",
    "### 安装需要的包\n",
    "https://gymnasium.farama.org/\n",
    "```sh\n",
    "### 动手学使用的gym版本为0.10.5，numpy1.26.4\n",
    "conda create -n rl2024 --clone dl2024\n",
    "### 旧版本可以选择装上gym==0.26.2\n",
    "新版本对numpy有要求 gymnasium （open ai gym 2022） numpy<2.0.0\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "\n",
    "# %pip  install \"gym==0.26.2\" 'pettingzoo==1.23.1' -i https://pypi.tuna.tsinghua.edu.cn/simple \n",
    "\n",
    "\n",
    "%pip install gymnasium comet_ml>=3.44.1 stable-baselines3 \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Gymnasium 是一个强化学习（Reinforcement Learning，RL）的工具包，它提供了各种强化学习的环境，允许开发者创建、运行和测试不同的RL算法。[classic-control] 是一种额外的依赖选项，包含经典控制论环境适用于控制任务的实验和研究。\n",
    "\n",
    "    |CartPole-v1| - —|一个简单的平衡杆环境|\n",
    "    |-|-|-|\n",
    "    |MountainCar-v0| - |一个小车爬坡的问题|\n",
    "    |Pendulum-v1| - |处理倒立摆的控制|\n",
    "    |Acrobot-v1| - |控制两个连接摆臂的旋转系统|\n",
    "\n",
    "- Stable Baselines3 是一个强化学习库，提供了一些常用的RL算法，如PPO（Proximal Policy Optimization）、DQN（Deep Q-Network）、A2C（Advantage Actor-Critic）等。它建立在 PyTorch 之上，专为研究和应用强化学习算法而设计。\n",
    "Stable Baselines3 主要用于快速实现和测试RL算法，适用于研究人员和开发人员。\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: gym==0.26.2 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (0.26.2)\n",
      "Requirement already satisfied: pettingzoo==1.23.1 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (1.23.1)\n",
      "Requirement already satisfied: numpy>=1.18.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gym==0.26.2) (1.25.0)\n",
      "Requirement already satisfied: cloudpickle>=1.2.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gym==0.26.2) (3.1.0)\n",
      "Requirement already satisfied: importlib_metadata>=4.8.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gym==0.26.2) (8.0.0)\n",
      "Requirement already satisfied: gym_notices>=0.0.4 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gym==0.26.2) (0.0.8)\n",
      "Requirement already satisfied: gymnasium>=0.28.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from pettingzoo==1.23.1) (1.0.0)\n",
      "Requirement already satisfied: typing-extensions>=4.3.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium>=0.28.0->pettingzoo==1.23.1) (4.11.0)\n",
      "Requirement already satisfied: farama-notifications>=0.0.1 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium>=0.28.0->pettingzoo==1.23.1) (0.0.4)\n",
      "Requirement already satisfied: zipp>=0.5 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from importlib_metadata>=4.8.0->gym==0.26.2) (3.16.2)\n",
      "Note: you may need to restart the kernel to use updated packages.\n",
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: swig in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (4.2.1.post0)\n",
      "Note: you may need to restart the kernel to use updated packages.\n",
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Requirement already satisfied: gymnasium[box2d] in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (1.0.0)\n",
      "Requirement already satisfied: numpy>=1.21.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (1.25.0)\n",
      "Requirement already satisfied: cloudpickle>=1.2.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (3.1.0)\n",
      "Requirement already satisfied: typing-extensions>=4.3.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (4.11.0)\n",
      "Requirement already satisfied: farama-notifications>=0.0.1 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (0.0.4)\n",
      "Requirement already satisfied: importlib-metadata>=4.8.0 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (8.0.0)\n",
      "Requirement already satisfied: box2d-py==2.3.5 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (2.3.5)\n",
      "Requirement already satisfied: pygame>=2.1.3 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (2.6.1)\n",
      "Requirement already satisfied: swig==4.* in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from gymnasium[box2d]) (4.2.1.post0)\n",
      "Requirement already satisfied: zipp>=0.5 in d:\\anaconda\\envs\\rl2024\\lib\\site-packages (from importlib-metadata>=4.8.0->gymnasium[box2d]) (3.16.2)\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "#pip install gym==0.18.3如果遇到问题，需要修改gym版本\n",
    "%pip  install gym==0.26.2 pettingzoo==1.23.1 -i https://pypi.tuna.tsinghua.edu.cn/simple\n",
    "%pip  install swig \n",
    "%pip install gymnasium[box2d] -i https://pypi.tuna.tsinghua.edu.cn/simple\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 下面的代码无法加载用户窗口，仅仅用于演示如何使用，正常的强化学习模型我们也不会输出视频，它太过占用内存。\n",
    "真正的代码在：[游戏地址](1-mygym\\2-spaceload.py)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Step: 1, Action: 1, Reward: -0.9251829764085346, Total Reward: -0.9251829764085346\n",
      "Step: 1, Action: 3, Reward: 1.0076807161108559, Total Reward: 1.0076807161108559\n",
      "Step: 1, Action: 1, Reward: -1.6504256451222534, Total Reward: -1.6504256451222534\n",
      "Step: 1, Action: 1, Reward: -1.9011581789744116, Total Reward: -1.9011581789744116\n",
      "Step: 1, Action: 0, Reward: -1.2662348687936174, Total Reward: -1.2662348687936174\n",
      "Step: 1, Action: 3, Reward: -0.5162120462368012, Total Reward: -0.5162120462368012\n",
      "Step: 1, Action: 1, Reward: -2.454141981392836, Total Reward: -2.454141981392836\n",
      "Step: 1, Action: 1, Reward: -2.76306103419344, Total Reward: -2.76306103419344\n",
      "Step: 1, Action: 0, Reward: -1.978643324424695, Total Reward: -1.978643324424695\n",
      "Step: 1, Action: 1, Reward: -2.9904709591645244, Total Reward: -2.9904709591645244\n",
      "Step: 1, Action: 2, Reward: 0.17983638821375508, Total Reward: 0.17983638821375508\n",
      "Step: 1, Action: 2, Reward: 0.3743690701108278, Total Reward: 0.3743690701108278\n",
      "Step: 1, Action: 2, Reward: 0.45259648796617286, Total Reward: 0.45259648796617286\n",
      "Step: 1, Action: 3, Reward: -1.32292190486089, Total Reward: -1.32292190486089\n",
      "Step: 1, Action: 0, Reward: -2.1284975408341325, Total Reward: -2.1284975408341325\n",
      "Step: 1, Action: 2, Reward: 1.0371382319902238, Total Reward: 1.0371382319902238\n",
      "Step: 1, Action: 3, Reward: -1.3052388397628636, Total Reward: -1.3052388397628636\n",
      "Step: 1, Action: 2, Reward: -0.7259766280171391, Total Reward: -0.7259766280171391\n",
      "Step: 1, Action: 3, Reward: -0.8775838544946726, Total Reward: -0.8775838544946726\n",
      "Step: 1, Action: 0, Reward: -1.735037516685452, Total Reward: -1.735037516685452\n",
      "Step: 1, Action: 1, Reward: -2.6580379872857223, Total Reward: -2.6580379872857223\n",
      "Step: 1, Action: 0, Reward: -1.9332366438694635, Total Reward: -1.9332366438694635\n",
      "Step: 1, Action: 2, Reward: 0.8185949204077303, Total Reward: 0.8185949204077303\n",
      "Step: 1, Action: 1, Reward: -2.8131629979759496, Total Reward: -2.8131629979759496\n",
      "Step: 1, Action: 3, Reward: -1.2283803678074878, Total Reward: -1.2283803678074878\n",
      "Step: 1, Action: 0, Reward: -1.8355926454327118, Total Reward: -1.8355926454327118\n",
      "Step: 1, Action: 1, Reward: -2.68604777676549, Total Reward: -2.68604777676549\n",
      "Step: 1, Action: 0, Reward: -1.9870486478574492, Total Reward: -1.9870486478574492\n",
      "Step: 1, Action: 2, Reward: 1.1308229757030517, Total Reward: 1.1308229757030517\n",
      "Step: 1, Action: 0, Reward: -2.046398261773362, Total Reward: -2.046398261773362\n",
      "Step: 1, Action: 3, Reward: -1.2026557410109706, Total Reward: -1.2026557410109706\n",
      "Step: 1, Action: 2, Reward: -0.4411240751500884, Total Reward: -0.4411240751500884\n",
      "Step: 1, Action: 3, Reward: -0.8817245309646171, Total Reward: -0.8817245309646171\n",
      "Step: 1, Action: 3, Reward: -0.7804556049580629, Total Reward: -0.7804556049580629\n",
      "Step: 1, Action: 1, Reward: -2.0324284363080936, Total Reward: -2.0324284363080936\n",
      "Step: 1, Action: 1, Reward: -2.2543963549213673, Total Reward: -2.2543963549213673\n",
      "Step: 1, Action: 1, Reward: -2.203681930075702, Total Reward: -2.203681930075702\n",
      "Step: 1, Action: 2, Reward: 1.7883803232999014, Total Reward: 1.7883803232999014\n",
      "Step: 1, Action: 3, Reward: -0.9191287018407468, Total Reward: -0.9191287018407468\n",
      "Step: 1, Action: 3, Reward: -0.6002924771138669, Total Reward: -0.6002924771138669\n",
      "Step: 1, Action: 1, Reward: -2.1000654191786325, Total Reward: -2.1000654191786325\n",
      "Step: 1, Action: 0, Reward: -1.34366620311161, Total Reward: -1.34366620311161\n",
      "Step: 1, Action: 2, Reward: 3.2096609301471064, Total Reward: 3.2096609301471064\n",
      "Step: 1, Action: 3, Reward: -0.4747879703354545, Total Reward: -0.4747879703354545\n",
      "Step: 1, Action: 1, Reward: -2.0505572893681845, Total Reward: -2.0505572893681845\n",
      "Step: 1, Action: 2, Reward: 3.1010322629115707, Total Reward: 3.1010322629115707\n",
      "Step: 1, Action: 0, Reward: -1.3943119979551, Total Reward: -1.3943119979551\n",
      "Step: 1, Action: 1, Reward: -2.234607962289543, Total Reward: -2.234607962289543\n",
      "Step: 1, Action: 1, Reward: -2.4836665394797977, Total Reward: -2.4836665394797977\n",
      "Step: 1, Action: 1, Reward: -2.6788338226039046, Total Reward: -2.6788338226039046\n",
      "Step: 1, Action: 0, Reward: -1.9068144343878544, Total Reward: -1.9068144343878544\n",
      "Step: 1, Action: 1, Reward: -2.487513985484638, Total Reward: -2.487513985484638\n",
      "Step: 1, Action: 2, Reward: 1.3752462534881544, Total Reward: 1.3752462534881544\n",
      "Step: 1, Action: 2, Reward: 1.0075428884117172, Total Reward: 1.0075428884117172\n",
      "Step: 1, Action: 2, Reward: 1.5297262479114238, Total Reward: 1.5297262479114238\n",
      "Step: 1, Action: 3, Reward: -1.1628059390145313, Total Reward: -1.1628059390145313\n",
      "Step: 1, Action: 0, Reward: -1.8761841460188862, Total Reward: -1.8761841460188862\n",
      "Step: 1, Action: 2, Reward: 0.3741556539371061, Total Reward: 0.3741556539371061\n",
      "Step: 1, Action: 2, Reward: 0.2684493052241635, Total Reward: 0.2684493052241635\n",
      "Step: 1, Action: 3, Reward: -1.0694931291099283, Total Reward: -1.0694931291099283\n",
      "Step: 1, Action: 1, Reward: -2.442230200550425, Total Reward: -2.442230200550425\n",
      "Step: 1, Action: 0, Reward: -1.7797072228340767, Total Reward: -1.7797072228340767\n",
      "Step: 1, Action: 0, Reward: -1.7527170397856935, Total Reward: -1.7527170397856935\n",
      "Step: 1, Action: 0, Reward: -1.725682740709658, Total Reward: -1.725682740709658\n",
      "Step: 1, Action: 2, Reward: -2.0351108021048274, Total Reward: -2.0351108021048274\n",
      "Step: 1, Action: 0, Reward: -1.594002115566127, Total Reward: -1.594002115566127\n",
      "Step: 1, Action: 3, Reward: -0.8951479104947839, Total Reward: -0.8951479104947839\n",
      "Step: 1, Action: 1, Reward: -2.2577604706986834, Total Reward: -2.2577604706986834\n",
      "Step: 1, Action: 1, Reward: -2.6305452034030723, Total Reward: -2.6305452034030723\n",
      "Step: 1, Action: 3, Reward: -1.0366534975272816, Total Reward: -1.0366534975272816\n",
      "Step: 1, Action: 2, Reward: 0.0638193926165343, Total Reward: 0.0638193926165343\n",
      "Step: 1, Action: 1, Reward: -2.5881549143878417, Total Reward: -2.5881549143878417\n",
      "Step: 1, Action: 0, Reward: -1.8274539720103746, Total Reward: -1.8274539720103746\n",
      "Step: 1, Action: 0, Reward: -1.8282142301956412, Total Reward: -1.8282142301956412\n",
      "Step: 1, Action: 1, Reward: -2.9909516386787787, Total Reward: -2.9909516386787787\n",
      "Step: 1, Action: 3, Reward: -1.4098782808842987, Total Reward: -1.4098782808842987\n",
      "Step: 1, Action: 0, Reward: -1.961805047340647, Total Reward: -1.961805047340647\n",
      "Step: 1, Action: 2, Reward: -1.0023123394511402, Total Reward: -1.0023123394511402\n",
      "Step: 1, Action: 1, Reward: -3.077917387165654, Total Reward: -3.077917387165654\n",
      "Step: 1, Action: 3, Reward: -1.5454805205380058, Total Reward: -1.5454805205380058\n",
      "Step: 1, Action: 0, Reward: -2.244054646445818, Total Reward: -2.244054646445818\n",
      "Step: 1, Action: 0, Reward: -2.339523119010039, Total Reward: -2.339523119010039\n",
      "Step: 1, Action: 0, Reward: -2.456953816465159, Total Reward: -2.456953816465159\n",
      "Step: 1, Action: 2, Reward: -2.6867355299467475, Total Reward: -2.6867355299467475\n",
      "Step: 1, Action: 3, Reward: -1.7351540829619398, Total Reward: -1.7351540829619398\n",
      "Step: 1, Action: 1, Reward: -3.5748689369810607, Total Reward: -3.5748689369810607\n",
      "Step: 1, Action: 2, Reward: -2.6001923189109446, Total Reward: -2.6001923189109446\n",
      "Step: 1, Action: 1, Reward: -4.485296657295378, Total Reward: -4.485296657295378\n",
      "Step: 1, Action: 2, Reward: -2.972986202451909, Total Reward: -2.972986202451909\n",
      "Step: 1, Action: 1, Reward: -5.114279342188325, Total Reward: -5.114279342188325\n",
      "Step: 1, Action: 1, Reward: 4.352535300987626, Total Reward: 4.352535300987626\n",
      "Step: 1, Action: 0, Reward: 66.86675173625139, Total Reward: 66.86675173625139\n",
      "Step: 1, Action: 1, Reward: -11.650376134628116, Total Reward: -11.650376134628116\n",
      "Step: 1, Action: 1, Reward: -5.364052823480789, Total Reward: -5.364052823480789\n",
      "Step: 1, Action: 3, Reward: 8.38533355858067, Total Reward: 8.38533355858067\n",
      "Step: 1, Action: 0, Reward: 4.293427205409074, Total Reward: 4.293427205409074\n",
      "Step: 1, Action: 3, Reward: 5.406508870345193, Total Reward: 5.406508870345193\n",
      "Step: 1, Action: 1, Reward: 3.4360456394311156, Total Reward: 3.4360456394311156\n",
      "Step: 1, Action: 0, Reward: 3.9331353248114453, Total Reward: 3.9331353248114453\n",
      "Step: 1, Action: 2, Reward: 0.08377928664425555, Total Reward: 0.08377928664425555\n",
      "Step: 1, Action: 3, Reward: -5.58503276427865, Total Reward: -5.58503276427865\n",
      "Step: 1, Action: 3, Reward: 4.812573750604259, Total Reward: 4.812573750604259\n",
      "Step: 1, Action: 1, Reward: 2.6039587343814516, Total Reward: 2.6039587343814516\n",
      "Step: 1, Action: 3, Reward: 4.375868146269938, Total Reward: 4.375868146269938\n",
      "Step: 1, Action: 1, Reward: 2.5614696797976264, Total Reward: 2.5614696797976264\n",
      "Step: 1, Action: 3, Reward: -100, Total Reward: -100\n"
     ]
    }
   ],
   "source": [
    "\n",
    "import gymnasium as gym\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# 创建环境并设置为 rgb_array 模式\n",
    "env = gym.make(\"LunarLander-v3\", render_mode=\"rgb_array\")\n",
    "observation, info = env.reset()\n",
    "\n",
    "episode_over = False\n",
    "while not episode_over:\n",
    "    action = env.action_space.sample()\n",
    "    observation, reward, terminated, truncated, info = env.step(action)\n",
    "    \n",
    "    # 渲染并显示图像\n",
    "    # frame = env.render()\n",
    "    # plt.imshow(frame)\n",
    "    # plt.axis(\"off\")\n",
    "    # plt.pause(0.01)  # 暂停以更新图像\n",
    "\n",
    "    step_count = 0\n",
    "    total_reward = 0\n",
    "    step_count += 1\n",
    "    total_reward += reward\n",
    "    print(f\"Step: {step_count}, Action: {action}, Reward: {reward}, Total Reward: {total_reward}\")\n",
    "    \n",
    "    episode_over = terminated or truncated\n",
    "\n",
    "env.close()\n",
    "# plt.show()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### using StableBaselines3 A2C Algorithm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "Cell \u001b[1;32mIn[8], line 2\u001b[0m\n\u001b[0;32m      1\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mcomet_ml\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mintegration\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mgymnasium\u001b[39;00m \u001b[39mimport\u001b[39;00m CometLogger  \u001b[39m# 导入CometLogger,是用于记录gymnasium环境的日志的工具\u001b[39;00m\n\u001b[1;32m----> 2\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mstable_baselines3\u001b[39;00m \u001b[39mimport\u001b[39;00m A2C   \u001b[39m# 导入A2C算法,是用于强化学习的算法\u001b[39;00m\n\u001b[0;32m      3\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mgymnasium\u001b[39;00m \u001b[39mas\u001b[39;00m \u001b[39mgym\u001b[39;00m \u001b[39m#\u001b[39;00m\n\u001b[0;32m      4\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mcomet_ml\u001b[39;00m \u001b[39mimport\u001b[39;00m Experiment\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\stable_baselines3\\__init__.py:3\u001b[0m\n\u001b[0;32m      1\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mos\u001b[39;00m\n\u001b[1;32m----> 3\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mstable_baselines3\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39ma2c\u001b[39;00m \u001b[39mimport\u001b[39;00m A2C\n\u001b[0;32m      4\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mstable_baselines3\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mcommon\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m \u001b[39mimport\u001b[39;00m get_system_info\n\u001b[0;32m      5\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mstable_baselines3\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mddpg\u001b[39;00m \u001b[39mimport\u001b[39;00m DDPG\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\stable_baselines3\\a2c\\__init__.py:1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mstable_baselines3\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39ma2c\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39ma2c\u001b[39;00m \u001b[39mimport\u001b[39;00m A2C\n\u001b[0;32m      2\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mstable_baselines3\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39ma2c\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mpolicies\u001b[39;00m \u001b[39mimport\u001b[39;00m CnnPolicy, MlpPolicy, MultiInputPolicy\n\u001b[0;32m      4\u001b[0m __all__ \u001b[39m=\u001b[39m [\u001b[39m\"\u001b[39m\u001b[39mCnnPolicy\u001b[39m\u001b[39m\"\u001b[39m, \u001b[39m\"\u001b[39m\u001b[39mMlpPolicy\u001b[39m\u001b[39m\"\u001b[39m, \u001b[39m\"\u001b[39m\u001b[39mMultiInputPolicy\u001b[39m\u001b[39m\"\u001b[39m, \u001b[39m\"\u001b[39m\u001b[39mA2C\u001b[39m\u001b[39m\"\u001b[39m]\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\stable_baselines3\\a2c\\a2c.py:3\u001b[0m\n\u001b[0;32m      1\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtyping\u001b[39;00m \u001b[39mimport\u001b[39;00m Any, ClassVar, Dict, Optional, Type, TypeVar, Union\n\u001b[1;32m----> 3\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mtorch\u001b[39;00m \u001b[39mas\u001b[39;00m \u001b[39mth\u001b[39;00m\n\u001b[0;32m      4\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mgymnasium\u001b[39;00m \u001b[39mimport\u001b[39;00m spaces\n\u001b[0;32m      5\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mnn\u001b[39;00m \u001b[39mimport\u001b[39;00m functional \u001b[39mas\u001b[39;00m F\n",
      "File \u001b[1;32m<frozen importlib._bootstrap>:1007\u001b[0m, in \u001b[0;36m_find_and_load\u001b[1;34m(name, import_)\u001b[0m\n",
      "File \u001b[1;32m<frozen importlib._bootstrap>:986\u001b[0m, in \u001b[0;36m_find_and_load_unlocked\u001b[1;34m(name, import_)\u001b[0m\n",
      "File \u001b[1;32m<frozen importlib._bootstrap>:680\u001b[0m, in \u001b[0;36m_load_unlocked\u001b[1;34m(spec)\u001b[0m\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\comet_ml\\monkey_patching.py:78\u001b[0m, in \u001b[0;36mCustomFileLoader.exec_module\u001b[1;34m(self, module)\u001b[0m\n\u001b[0;32m     75\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mexec_module\u001b[39m(\u001b[39mself\u001b[39m, module):\n\u001b[0;32m     76\u001b[0m     \u001b[39m# Execute the module source code to define all the objects\u001b[39;00m\n\u001b[0;32m     77\u001b[0m     \u001b[39mif\u001b[39;00m \u001b[39mhasattr\u001b[39m(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mloader, \u001b[39m\"\u001b[39m\u001b[39mexec_module\u001b[39m\u001b[39m\"\u001b[39m):\n\u001b[1;32m---> 78\u001b[0m         \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mloader\u001b[39m.\u001b[39;49mexec_module(module)\n\u001b[0;32m     79\u001b[0m     \u001b[39melse\u001b[39;00m:\n\u001b[0;32m     80\u001b[0m         \u001b[39m# zipimporter doesn't use exec_module\u001b[39;00m\n\u001b[0;32m     81\u001b[0m         module \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mloader\u001b[39m.\u001b[39mload_module(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mfullname)\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\torch\\__init__.py:1253\u001b[0m\n\u001b[0;32m   1251\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mbackends\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mopenmp\u001b[39;00m\n\u001b[0;32m   1252\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mbackends\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mquantized\u001b[39;00m\n\u001b[1;32m-> 1253\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\n\u001b[0;32m   1254\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m \u001b[39mimport\u001b[39;00m __config__ \u001b[39mas\u001b[39;00m __config__\n\u001b[0;32m   1255\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m \u001b[39mimport\u001b[39;00m __future__ \u001b[39mas\u001b[39;00m __future__\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\torch\\utils\\data\\__init__.py:20\u001b[0m\n\u001b[0;32m      3\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39msampler\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m      4\u001b[0m     BatchSampler,\n\u001b[0;32m      5\u001b[0m     RandomSampler,\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m      9\u001b[0m     WeightedRandomSampler,\n\u001b[0;32m     10\u001b[0m )\n\u001b[0;32m     11\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdataset\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m     12\u001b[0m     ChainDataset,\n\u001b[0;32m     13\u001b[0m     ConcatDataset,\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m     18\u001b[0m     random_split,\n\u001b[0;32m     19\u001b[0m )\n\u001b[1;32m---> 20\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipes\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipe\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m     21\u001b[0m     DFIterDataPipe,\n\u001b[0;32m     22\u001b[0m     DataChunk,\n\u001b[0;32m     23\u001b[0m     IterDataPipe,\n\u001b[0;32m     24\u001b[0m     MapDataPipe,\n\u001b[0;32m     25\u001b[0m )\n\u001b[0;32m     26\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdataloader\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m     27\u001b[0m     DataLoader,\n\u001b[0;32m     
28\u001b[0m     _DatasetKind,\n\u001b[1;32m   (...)\u001b[0m\n\u001b[0;32m     31\u001b[0m     default_convert,\n\u001b[0;32m     32\u001b[0m )\n\u001b[0;32m     33\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdistributed\u001b[39;00m \u001b[39mimport\u001b[39;00m DistributedSampler\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\torch\\utils\\data\\datapipes\\__init__.py:1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39m.\u001b[39;00m \u001b[39mimport\u001b[39;00m \u001b[39miter\u001b[39m\n\u001b[0;32m      2\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39m.\u001b[39;00m \u001b[39mimport\u001b[39;00m \u001b[39mmap\u001b[39m\n\u001b[0;32m      3\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39m.\u001b[39;00m \u001b[39mimport\u001b[39;00m dataframe\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\torch\\utils\\data\\datapipes\\iter\\__init__.py:1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipes\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39miter\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m      2\u001b[0m     IterableWrapperIterDataPipe \u001b[39mas\u001b[39;00m IterableWrapper,\n\u001b[0;32m      3\u001b[0m )\n\u001b[0;32m      4\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipes\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39miter\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mcallable\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m      5\u001b[0m     CollatorIterDataPipe \u001b[39mas\u001b[39;00m Collator,\n\u001b[0;32m      6\u001b[0m     MapperIterDataPipe \u001b[39mas\u001b[39;00m Mapper,\n\u001b[0;32m      7\u001b[0m )\n\u001b[0;32m      8\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipes\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39miter\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mcombinatorics\u001b[39;00m \u001b[39mimport\u001b[39;00m (\n\u001b[0;32m      9\u001b[0m     SamplerIterDataPipe \u001b[39mas\u001b[39;00m Sampler,\n\u001b[0;32m     10\u001b[0m     ShufflerIterDataPipe \u001b[39mas\u001b[39;00m Shuffler,\n\u001b[0;32m     11\u001b[0m )\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\torch\\utils\\data\\datapipes\\iter\\utils.py:3\u001b[0m\n\u001b[0;32m      1\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mcopy\u001b[39;00m\n\u001b[0;32m      2\u001b[0m \u001b[39mimport\u001b[39;00m \u001b[39mwarnings\u001b[39;00m\n\u001b[1;32m----> 3\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipes\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdatapipe\u001b[39;00m \u001b[39mimport\u001b[39;00m IterDataPipe\n\u001b[0;32m      5\u001b[0m __all__ \u001b[39m=\u001b[39m [\u001b[39m\"\u001b[39m\u001b[39mIterableWrapperIterDataPipe\u001b[39m\u001b[39m\"\u001b[39m, ]\n\u001b[0;32m      8\u001b[0m \u001b[39mclass\u001b[39;00m \u001b[39mIterableWrapperIterDataPipe\u001b[39;00m(IterDataPipe):\n",
      "File \u001b[1;32md:\\Anaconda\\envs\\rl2024\\lib\\site-packages\\torch\\utils\\data\\datapipes\\datapipe.py:15\u001b[0m\n\u001b[0;32m     12\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mtorch\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mutils\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdata\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mdataset\u001b[39;00m \u001b[39mimport\u001b[39;00m Dataset, IterableDataset\n\u001b[0;32m     14\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[1;32m---> 15\u001b[0m     \u001b[39mimport\u001b[39;00m \u001b[39mdill\u001b[39;00m\n\u001b[0;32m     16\u001b[0m     \u001b[39m# XXX: By default, dill writes the Pickler dispatch table to inject its\u001b[39;00m\n\u001b[0;32m     17\u001b[0m     \u001b[39m# own logic there. This globally affects the behavior of the standard library\u001b[39;00m\n\u001b[0;32m     18\u001b[0m     \u001b[39m# pickler for any user who transitively depends on this module!\u001b[39;00m\n\u001b[0;32m     19\u001b[0m     \u001b[39m# Undo this extension to avoid altering the behavior of the pickler globally.\u001b[39;00m\n\u001b[0;32m     20\u001b[0m     dill\u001b[39m.\u001b[39mextend(use_dill\u001b[39m=\u001b[39m\u001b[39mFalse\u001b[39;00m)\n",
      "\u001b[1;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "from comet_ml.integration.gymnasium import CometLogger  # logs Gymnasium environment metrics to Comet\n",
    "from stable_baselines3 import A2C   # A2C reinforcement-learning algorithm\n",
    "import gymnasium as gym\n",
    "from comet_ml import Experiment\n",
    "\n",
    "\n",
    "# Create a Comet experiment to record this run\n",
    "experiment = Experiment(\n",
    "    api_key=\"YOUR_COMET_API_KEY\",  # replace with your own Comet API key; never commit a real key\n",
    "    project_name=\"comet-example-gymnasium-notebook\"\n",
    ")\n",
    "\n",
    "\n",
    "# Create the Acrobot-v1 environment; render_mode=\"rgb_array\" renders frames as RGB arrays\n",
    "env = gym.make(\"Acrobot-v1\", render_mode=\"rgb_array\")\n",
    "\n",
    "# Uncomment if you want to upload videos of your environment to Comet\n",
    "# env = gym.wrappers.RecordVideo(env, 'test')\n",
    "env = CometLogger(env, experiment)  # wrap the environment so episode metrics are logged\n",
    "\n",
    "model = A2C(\"MlpPolicy\", env, verbose=0)    # A2C with an MLP policy; verbose=0 suppresses training logs\n",
    "model.learn(total_timesteps=10000)  # train for 10,000 environment steps\n",
    "\n",
    "env.close()\n",
    "experiment.end()\n",
    "experiment.display()  # show the Comet experiment dashboard inline\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rl2024",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.17"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
