Upload folder using huggingface_hub
This view is limited to 50 files because the commit contains too many changes.
- docs/BENCHMARKS.md +139 -18
- docs/CHANGELOG.md +15 -0
- docs/PHASE5_OPS.md +650 -0
- docs/phase5_brax_comparison.md +446 -0
- docs/phase5_spec_research.md +273 -0
- docs/plots/AcrobotSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/AcrobotSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/AeroCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/AlohaHandOver_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/AlohaSinglePegInsertion_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/ApolloJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BallInCup_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BarkourJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BerkeleyHumanoidJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/BerkeleyHumanoidJoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CartpoleBalanceSparse_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CartpoleBalance_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CartpoleSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CartpoleSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/CheetahRun_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/FingerSpin_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/FingerTurnEasy_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/FingerTurnHard_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/FishSwim_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/G1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/G1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Go1Footstand_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Go1Getup_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Go1Handstand_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Go1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Go1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/H1InplaceGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/H1JoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HopperHop_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HopperStand_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HumanoidRun_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HumanoidStand_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/HumanoidWalk_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/LeapCubeReorient_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/LeapCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/Op3Joystick_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PandaOpenCabinet_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PandaPickCubeCartesian_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PandaPickCubeOrientation_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PandaPickCube_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PandaRobotiqPushCube_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PendulumSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/PointMass_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/ReacherEasy_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
- docs/plots/ReacherHard_multi_trial_graph_mean_returns_ma_vs_frames.png +3 -0
docs/BENCHMARKS.md
CHANGED
@@ -110,11 +110,12 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20
 | Phase | Category | Envs | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC | CrossQ | Overall |
 |-------|----------|------|-----------|-------|-----|----------|-----|-----|-----|--------|---------|
 | 1 | Classic Control | 3 | ✅ | ✅ | ⚠️ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Done |
-| 2 | Box2D | 2 | N/A | N/A | ⚠️ | ✅ |
 | 3 | MuJoCo | 11 | N/A | N/A | N/A | N/A | N/A | ⚠️ | ⚠️ | ⚠️ | Done |
-| 4 | Atari | 57 | N/A | N/A | N/A | Skip | Done | Done | Done |

-**Legend**: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable |

 ---

@@ -137,7 +138,7 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20
 | A2C | ✅ | 496.68 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_cartpole_arc | [a2c_gae_cartpole_arc_2026_02_11_142531](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_cartpole_arc_2026_02_11_142531) |
 | PPO | ✅ | 498.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_cartpole_arc | [ppo_cartpole_arc_2026_02_11_144029](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_cartpole_arc_2026_02_11_144029) |
 | SAC | ✅ | 406.09 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_cartpole_arc | [sac_cartpole_arc_2026_02_11_144155](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_cartpole_arc_2026_02_11_144155) |
-| CrossQ | ⚠️ |

 [plot image]

@@ -166,7 +167,7 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20
 | Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
 |-----------|--------|-----|-----------|-----------|---------|
-| A2C |
 | PPO | ✅ | -174.87 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_pendulum_arc | [ppo_pendulum_arc_2026_02_11_162156](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_pendulum_arc_2026_02_11_162156) |
 | SAC | ✅ | -150.97 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_pendulum_arc | [sac_pendulum_arc_2026_02_11_162240](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pendulum_arc_2026_02_11_162240) |
 | CrossQ | ✅ | -145.66 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_pendulum | [crossq_pendulum_2026_02_28_130648](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_pendulum_2026_02_28_130648) |
@@ -185,10 +186,10 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20
 |-----------|--------|-----|-----------|-----------|---------|
 | DQN | ⚠️ | 195.21 | [slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml) | dqn_concat_lunar_arc | [dqn_concat_lunar_arc_2026_02_11_201407](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/dqn_concat_lunar_arc_2026_02_11_201407) |
 | DDQN+PER | ✅ | 265.90 | [slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml) | ddqn_per_concat_lunar_arc | [ddqn_per_concat_lunar_arc_2026_02_13_105115](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ddqn_per_concat_lunar_arc_2026_02_13_105115) |
-| A2C |
 | PPO | ⚠️ | 183.30 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_arc | [ppo_lunar_arc_2026_02_11_201303](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_arc_2026_02_11_201303) |
 | SAC | ⚠️ | 106.17 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_arc | [sac_lunar_arc_2026_02_11_201417](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_arc_2026_02_11_201417) |
-| CrossQ |

 [plot image]

@@ -200,7 +201,7 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20
 | Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
 |-----------|--------|-----|-----------|-----------|---------|
-| A2C |
 | PPO | ⚠️ | 132.58 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_continuous_arc | [ppo_lunar_continuous_arc_2026_02_11_224229](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_continuous_arc_2026_02_11_224229) |
 | SAC | ⚠️ | 125.00 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_continuous_arc | [sac_lunar_continuous_arc_2026_02_12_222203](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_continuous_arc_2026_02_12_222203) |
 | CrossQ | ✅ | 268.91 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar_continuous | [crossq_lunar_continuous_2026_03_01_140517](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_continuous_2026_03_01_140517) |
@@ -338,7 +339,7 @@ source .env && slm-lab run-remote --gpu \
 |-----------|--------|-----|-----------|-----------|---------|
 | PPO | ✅ | 2661.26 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoid_2026_02_12_185439](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoid_2026_02_12_185439) |
 | SAC | ✅ | 1989.65 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_arc | [sac_humanoid_arc_2026_02_12_020016](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_arc_2026_02_12_020016) |
-| CrossQ | ✅ |

 [plot image]

@@ -422,7 +423,7 @@ source .env && slm-lab run-remote --gpu \
 |-----------|--------|-----|-----------|-----------|---------|
 | PPO | ✅ | 282.44 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_swimmer_arc | [ppo_swimmer_arc_swimmer_2026_02_12_100445](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_swimmer_arc_swimmer_2026_02_12_100445) |
 | SAC | ✅ | 301.34 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_swimmer_arc | [sac_swimmer_arc_2026_02_12_054349](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_swimmer_arc_2026_02_12_054349) |
-| CrossQ | ✅ | 221.12 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_swimmer | [

 [plot image]

@@ -455,7 +456,7 @@ source .env && slm-lab run-remote --gpu \
 - **A2C**: [a2c_atari_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_atari_arc.yaml) - RMSprop (lr=7e-4), training_frequency=32
 - **PPO**: [ppo_atari_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_atari_arc.yaml) - AdamW (lr=2.5e-4), minibatch=256, horizon=128, epochs=4, max_frame=10e6
 - **SAC**: [sac_atari_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_atari_arc.yaml) - Categorical SAC, AdamW (lr=3e-4), training_iter=3, training_frequency=4, max_frame=2e6
-- **CrossQ**: [crossq_atari.yaml](../slm_lab/spec/benchmark/crossq/crossq_atari.yaml) - Categorical CrossQ,

 **PPO Lambda Variants** (table shows best result per game):

@@ -486,7 +487,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \
 > **Note**: HF Data links marked "-" indicate runs completed but not yet uploaded to HuggingFace. Scores are extracted from local trial_metrics.

-| ENV |
 |-----|-------|-----------|---------|
 | ALE/AirRaid-v5 | 7042.84 | ppo_atari_arc | [ppo_atari_arc_airraid_2026_02_13_124015](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_airraid_2026_02_13_124015) |
 | | 1832.54 | sac_atari_arc | [sac_atari_arc_airraid_2026_02_17_104002](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_airraid_2026_02_17_104002) |
@@ -530,7 +531,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \
 | ALE/Breakout-v5 | 326.47 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_breakout_2026_02_13_230455](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_breakout_2026_02_13_230455) |
 | | 20.23 | sac_atari_arc | [sac_atari_arc_breakout_2026_02_15_201235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_breakout_2026_02_15_201235) |
 | | 273 | a2c_gae_atari_arc | [a2c_gae_atari_breakout_2026_01_31_213610](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_breakout_2026_01_31_213610) |
-| |
 | ALE/Carnival-v5 | 3912.59 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_carnival_2026_02_13_230438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_carnival_2026_02_13_230438) |
 | | 3501.37 | sac_atari_arc | [sac_atari_arc_carnival_2026_02_17_105834](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_carnival_2026_02_17_105834) |
 | | 2170 | a2c_gae_atari_arc | [a2c_gae_atari_carnival_2026_02_01_082726](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_carnival_2026_02_01_082726) |

@@ -594,7 +595,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \
 | ALE/MsPacman-v5 | 2330.74 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_mspacman_2026_02_14_102435](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_mspacman_2026_02_14_102435) |
 | | 1336.96 | sac_atari_arc | [sac_atari_arc_mspacman_2026_02_17_221523](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_mspacman_2026_02_17_221523) |
 | | 2110 | a2c_gae_atari_arc | [a2c_gae_atari_mspacman_2026_02_01_001100](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_mspacman_2026_02_01_001100) |
-| |
 | ALE/NameThisGame-v5 | 6879.23 | ppo_atari_arc | [ppo_atari_arc_namethisgame_2026_02_14_103319](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_namethisgame_2026_02_14_103319) |
 | | 3992.71 | sac_atari_arc | [sac_atari_arc_namethisgame_2026_02_17_220905](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_namethisgame_2026_02_17_220905) |
 | | 5412 | a2c_gae_atari_arc | [a2c_gae_atari_namethisgame_2026_02_01_132733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_namethisgame_2026_02_01_132733) |

@@ -604,14 +605,14 @@ source .env && slm-lab run-remote --gpu -s env=ENV \
 | ALE/Pong-v5 | 16.69 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_pong_2026_02_14_103722](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_pong_2026_02_14_103722) |
 | | 10.89 | sac_atari_arc | [sac_atari_arc_pong_2026_02_17_160429](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pong_2026_02_17_160429) |
 | | 10.17 | a2c_gae_atari_arc | [a2c_gae_atari_pong_2026_01_31_213635](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pong_2026_01_31_213635) |
-| |
 | ALE/Pooyan-v5 | 5308.66 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_pooyan_2026_02_14_114730](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_pooyan_2026_02_14_114730) |
 | | 2530.78 | sac_atari_arc | [sac_atari_arc_pooyan_2026_02_17_220346](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pooyan_2026_02_17_220346) |
 | | 2997 | a2c_gae_atari_arc | [a2c_gae_atari_pooyan_2026_02_01_132748](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pooyan_2026_02_01_132748) |
 | ALE/Qbert-v5 | 15460.48 | ppo_atari_arc | [ppo_atari_arc_qbert_2026_02_14_120409](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_qbert_2026_02_14_120409) |
 | | 3331.98 | sac_atari_arc | [sac_atari_arc_qbert_2026_02_17_223117](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_qbert_2026_02_17_223117) |
 | | 12619 | a2c_gae_atari_arc | [a2c_gae_atari_qbert_2026_01_31_213720](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_qbert_2026_01_31_213720) |
-| |
 | ALE/Riverraid-v5 | 9599.75 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_riverraid_2026_02_14_124700](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_riverraid_2026_02_14_124700) |
 | | 4744.95 | sac_atari_arc | [sac_atari_arc_riverraid_2026_02_18_014310](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_riverraid_2026_02_18_014310) |
 | | 6558 | a2c_gae_atari_arc | [a2c_gae_atari_riverraid_2026_02_01_132507](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_riverraid_2026_02_01_132507) |
@@ -624,7 +625,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \
 | ALE/Seaquest-v5 | 1775.14 | ppo_atari_arc | [ppo_atari_arc_seaquest_2026_02_11_095444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_seaquest_2026_02_11_095444) |
 | | 1565.44 | sac_atari_arc | [sac_atari_arc_seaquest_2026_02_18_020822](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_seaquest_2026_02_18_020822) |
 | | 850 | a2c_gae_atari_arc | [a2c_gae_atari_seaquest_2026_02_01_001001](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_seaquest_2026_02_01_001001) |
-| |
 | ALE/Skiing-v5 | -28217.28 | ppo_atari_arc | [ppo_atari_arc_skiing_2026_02_14_174807](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_skiing_2026_02_14_174807) |
 | | -17464.22 | sac_atari_arc | [sac_atari_arc_skiing_2026_02_18_024444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_skiing_2026_02_18_024444) |
 | | -14235 | a2c_gae_atari_arc | [a2c_gae_atari_skiing_2026_02_01_132451](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_skiing_2026_02_01_132451) |

@@ -634,7 +635,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \
 | ALE/SpaceInvaders-v5 | 892.49 | ppo_atari_arc | [ppo_atari_arc_spaceinvaders_2026_02_14_131114](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_spaceinvaders_2026_02_14_131114) |
 | | 507.33 | sac_atari_arc | [sac_atari_arc_spaceinvaders_2026_02_18_033139](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_spaceinvaders_2026_02_18_033139) |
 | | 784 | a2c_gae_atari_arc | [a2c_gae_atari_spaceinvaders_2026_02_01_000950](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_spaceinvaders_2026_02_01_000950) |
-| |
 | ALE/StarGunner-v5 | 49328.73 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_stargunner_2026_02_14_131149](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_stargunner_2026_02_14_131149) |
 | | 4295.97 | sac_atari_arc | [sac_atari_arc_stargunner_2026_02_18_033151](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_stargunner_2026_02_18_033151) |
 | | 8665 | a2c_gae_atari_arc | [a2c_gae_atari_stargunner_2026_02_01_132406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_stargunner_2026_02_01_132406) |

@@ -760,3 +761,123 @@ source .env && slm-lab run-remote --gpu -s env=ENV \

 </details>
 | Phase | Category | Envs | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC | CrossQ | Overall |
 |-------|----------|------|-----------|-------|-----|----------|-----|-----|-----|--------|---------|
 | 1 | Classic Control | 3 | ✅ | ✅ | ⚠️ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Done |
+| 2 | Box2D | 2 | N/A | N/A | ⚠️ | ✅ | | ⚠️ | ⚠️ | ⚠️ | Done |
 | 3 | MuJoCo | 11 | N/A | N/A | N/A | N/A | N/A | ⚠️ | ⚠️ | ⚠️ | Done |
+| 4 | Atari | 57 | N/A | N/A | N/A | Skip | Done | Done | Done | | Done |
+| 5 | Playground | 54 | N/A | N/A | N/A | N/A | N/A | 🔄 | 🔄 | N/A | In progress |

+**Legend**: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable | Failed | 🔄 In progress/Pending | Skip Not started | N/A Not applicable

 ---
 | A2C | ✅ | 496.68 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_cartpole_arc | [a2c_gae_cartpole_arc_2026_02_11_142531](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_cartpole_arc_2026_02_11_142531) |
 | PPO | ✅ | 498.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_cartpole_arc | [ppo_cartpole_arc_2026_02_11_144029](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_cartpole_arc_2026_02_11_144029) |
 | SAC | ✅ | 406.09 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_cartpole_arc | [sac_cartpole_arc_2026_02_11_144155](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_cartpole_arc_2026_02_11_144155) |
+| CrossQ | ⚠️ | 334.59 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_cartpole | [crossq_cartpole_2026_03_02_100434](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_cartpole_2026_03_02_100434) |

 [plot image]

 | Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
 |-----------|--------|-----|-----------|-----------|---------|
+| A2C | | -820.74 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_pendulum_arc | [a2c_gae_pendulum_arc_2026_02_11_162217](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_pendulum_arc_2026_02_11_162217) |
 | PPO | ✅ | -174.87 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_pendulum_arc | [ppo_pendulum_arc_2026_02_11_162156](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_pendulum_arc_2026_02_11_162156) |
 | SAC | ✅ | -150.97 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_pendulum_arc | [sac_pendulum_arc_2026_02_11_162240](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pendulum_arc_2026_02_11_162240) |
 | CrossQ | ✅ | -145.66 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_pendulum | [crossq_pendulum_2026_02_28_130648](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_pendulum_2026_02_28_130648) |

 |-----------|--------|-----|-----------|-----------|---------|
 | DQN | ⚠️ | 195.21 | [slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml) | dqn_concat_lunar_arc | [dqn_concat_lunar_arc_2026_02_11_201407](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/dqn_concat_lunar_arc_2026_02_11_201407) |
 | DDQN+PER | ✅ | 265.90 | [slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml) | ddqn_per_concat_lunar_arc | [ddqn_per_concat_lunar_arc_2026_02_13_105115](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ddqn_per_concat_lunar_arc_2026_02_13_105115) |
+| A2C | | 27.38 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_arc | [a2c_gae_lunar_arc_2026_02_11_224304](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_arc_2026_02_11_224304) |
 | PPO | ⚠️ | 183.30 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_arc | [ppo_lunar_arc_2026_02_11_201303](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_arc_2026_02_11_201303) |
 | SAC | ⚠️ | 106.17 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_arc | [sac_lunar_arc_2026_02_11_201417](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_arc_2026_02_11_201417) |
+| CrossQ | | 139.21 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar | [crossq_lunar_2026_02_28_130733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_2026_02_28_130733) |

 [plot image]

 | Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data |
 |-----------|--------|-----|-----------|-----------|---------|
+| A2C | | -76.81 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_continuous_arc | [a2c_gae_lunar_continuous_arc_2026_02_11_224301](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_continuous_arc_2026_02_11_224301) |
 | PPO | ⚠️ | 132.58 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_continuous_arc | [ppo_lunar_continuous_arc_2026_02_11_224229](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_continuous_arc_2026_02_11_224229) |
 | SAC | ⚠️ | 125.00 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_continuous_arc | [sac_lunar_continuous_arc_2026_02_12_222203](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_continuous_arc_2026_02_12_222203) |
 | CrossQ | ✅ | 268.91 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar_continuous | [crossq_lunar_continuous_2026_03_01_140517](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_continuous_2026_03_01_140517) |
 |-----------|--------|-----|-----------|-----------|---------|
 | PPO | ✅ | 2661.26 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoid_2026_02_12_185439](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoid_2026_02_12_185439) |
 | SAC | ✅ | 1989.65 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_arc | [sac_humanoid_arc_2026_02_12_020016](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_arc_2026_02_12_020016) |
+| CrossQ | ✅ | 1755.29 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_humanoid | [crossq_humanoid_2026_03_01_165208](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_humanoid_2026_03_01_165208) |

 [plot image]

 |-----------|--------|-----|-----------|-----------|---------|
 | PPO | ✅ | 282.44 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_swimmer_arc | [ppo_swimmer_arc_swimmer_2026_02_12_100445](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_swimmer_arc_swimmer_2026_02_12_100445) |
 | SAC | ✅ | 301.34 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_swimmer_arc | [sac_swimmer_arc_2026_02_12_054349](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_swimmer_arc_2026_02_12_054349) |
+| CrossQ | ✅ | 221.12 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_swimmer | [crossq_swimmer_2026_02_21_184204](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_swimmer_2026_02_21_184204) |

 [plot image]

 - **A2C**: [a2c_atari_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_atari_arc.yaml) - RMSprop (lr=7e-4), training_frequency=32
 - **PPO**: [ppo_atari_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_atari_arc.yaml) - AdamW (lr=2.5e-4), minibatch=256, horizon=128, epochs=4, max_frame=10e6
 - **SAC**: [sac_atari_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_atari_arc.yaml) - Categorical SAC, AdamW (lr=3e-4), training_iter=3, training_frequency=4, max_frame=2e6
+- **CrossQ**: [crossq_atari.yaml](../slm_lab/spec/benchmark/crossq/crossq_atari.yaml) - Categorical CrossQ, Adam (lr=1e-3), training_iter=1, training_frequency=4, max_frame=2e6 (experimental — limited results on 6 games)

 **PPO Lambda Variants** (table shows best result per game):
| 487 |
|
| 488 |
> **Note**: HF Data links marked "-" indicate runs completed but not yet uploaded to HuggingFace. Scores are extracted from local trial_metrics.
|
| 489 |
|
| 490 |
+
| ENV | MA | SPEC_NAME | HF Data |
|-----|-------|-----------|---------|
| ALE/AirRaid-v5 | 7042.84 | ppo_atari_arc | [ppo_atari_arc_airraid_2026_02_13_124015](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_airraid_2026_02_13_124015) |
| | 1832.54 | sac_atari_arc | [sac_atari_arc_airraid_2026_02_17_104002](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_airraid_2026_02_17_104002) |
| ALE/Breakout-v5 | 326.47 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_breakout_2026_02_13_230455](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_breakout_2026_02_13_230455) |
| | 20.23 | sac_atari_arc | [sac_atari_arc_breakout_2026_02_15_201235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_breakout_2026_02_15_201235) |
| | 273 | a2c_gae_atari_arc | [a2c_gae_atari_breakout_2026_01_31_213610](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_breakout_2026_01_31_213610) |
| | 4.40 | crossq_atari | [crossq_atari_breakout_2026_02_25_030241](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_breakout_2026_02_25_030241) |
| ALE/Carnival-v5 | 3912.59 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_carnival_2026_02_13_230438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_carnival_2026_02_13_230438) |
| | 3501.37 | sac_atari_arc | [sac_atari_arc_carnival_2026_02_17_105834](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_carnival_2026_02_17_105834) |
| | 2170 | a2c_gae_atari_arc | [a2c_gae_atari_carnival_2026_02_01_082726](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_carnival_2026_02_01_082726) |
| ALE/MsPacman-v5 | 2330.74 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_mspacman_2026_02_14_102435](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_mspacman_2026_02_14_102435) |
| | 1336.96 | sac_atari_arc | [sac_atari_arc_mspacman_2026_02_17_221523](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_mspacman_2026_02_17_221523) |
| | 2110 | a2c_gae_atari_arc | [a2c_gae_atari_mspacman_2026_02_01_001100](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_mspacman_2026_02_01_001100) |
| | 327.79 | crossq_atari | [crossq_atari_mspacman_2026_02_23_171317](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_mspacman_2026_02_23_171317) |
| ALE/NameThisGame-v5 | 6879.23 | ppo_atari_arc | [ppo_atari_arc_namethisgame_2026_02_14_103319](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_namethisgame_2026_02_14_103319) |
| | 3992.71 | sac_atari_arc | [sac_atari_arc_namethisgame_2026_02_17_220905](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_namethisgame_2026_02_17_220905) |
| | 5412 | a2c_gae_atari_arc | [a2c_gae_atari_namethisgame_2026_02_01_132733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_namethisgame_2026_02_01_132733) |
| ALE/Pong-v5 | 16.69 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_pong_2026_02_14_103722](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_pong_2026_02_14_103722) |
| | 10.89 | sac_atari_arc | [sac_atari_arc_pong_2026_02_17_160429](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pong_2026_02_17_160429) |
| | 10.17 | a2c_gae_atari_arc | [a2c_gae_atari_pong_2026_01_31_213635](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pong_2026_01_31_213635) |
| | -20.59 | crossq_atari | [crossq_atari_pong_2026_02_23_171158](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_pong_2026_02_23_171158) |
| ALE/Pooyan-v5 | 5308.66 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_pooyan_2026_02_14_114730](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_pooyan_2026_02_14_114730) |
| | 2530.78 | sac_atari_arc | [sac_atari_arc_pooyan_2026_02_17_220346](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pooyan_2026_02_17_220346) |
| | 2997 | a2c_gae_atari_arc | [a2c_gae_atari_pooyan_2026_02_01_132748](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pooyan_2026_02_01_132748) |
| ALE/Qbert-v5 | 15460.48 | ppo_atari_arc | [ppo_atari_arc_qbert_2026_02_14_120409](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_qbert_2026_02_14_120409) |
| | 3331.98 | sac_atari_arc | [sac_atari_arc_qbert_2026_02_17_223117](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_qbert_2026_02_17_223117) |
| | 12619 | a2c_gae_atari_arc | [a2c_gae_atari_qbert_2026_01_31_213720](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_qbert_2026_01_31_213720) |
| | 3189.73 | crossq_atari | [crossq_atari_qbert_2026_02_25_030458](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_qbert_2026_02_25_030458) |
| ALE/Riverraid-v5 | 9599.75 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_riverraid_2026_02_14_124700](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_riverraid_2026_02_14_124700) |
| | 4744.95 | sac_atari_arc | [sac_atari_arc_riverraid_2026_02_18_014310](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_riverraid_2026_02_18_014310) |
| | 6558 | a2c_gae_atari_arc | [a2c_gae_atari_riverraid_2026_02_01_132507](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_riverraid_2026_02_01_132507) |
| ALE/Seaquest-v5 | 1775.14 | ppo_atari_arc | [ppo_atari_arc_seaquest_2026_02_11_095444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_seaquest_2026_02_11_095444) |
| | 1565.44 | sac_atari_arc | [sac_atari_arc_seaquest_2026_02_18_020822](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_seaquest_2026_02_18_020822) |
| | 850 | a2c_gae_atari_arc | [a2c_gae_atari_seaquest_2026_02_01_001001](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_seaquest_2026_02_01_001001) |
| | 234.63 | crossq_atari | [crossq_atari_seaquest_2026_02_25_030441](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_seaquest_2026_02_25_030441) |
| ALE/Skiing-v5 | -28217.28 | ppo_atari_arc | [ppo_atari_arc_skiing_2026_02_14_174807](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_skiing_2026_02_14_174807) |
| | -17464.22 | sac_atari_arc | [sac_atari_arc_skiing_2026_02_18_024444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_skiing_2026_02_18_024444) |
| | -14235 | a2c_gae_atari_arc | [a2c_gae_atari_skiing_2026_02_01_132451](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_skiing_2026_02_01_132451) |
| ALE/SpaceInvaders-v5 | 892.49 | ppo_atari_arc | [ppo_atari_arc_spaceinvaders_2026_02_14_131114](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_spaceinvaders_2026_02_14_131114) |
| | 507.33 | sac_atari_arc | [sac_atari_arc_spaceinvaders_2026_02_18_033139](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_spaceinvaders_2026_02_18_033139) |
| | 784 | a2c_gae_atari_arc | [a2c_gae_atari_spaceinvaders_2026_02_01_000950](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_spaceinvaders_2026_02_01_000950) |
| | 404.50 | crossq_atari | [crossq_atari_spaceinvaders_2026_02_25_030410](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_spaceinvaders_2026_02_25_030410) |
| ALE/StarGunner-v5 | 49328.73 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_stargunner_2026_02_14_131149](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_stargunner_2026_02_14_131149) |
| | 4295.97 | sac_atari_arc | [sac_atari_arc_stargunner_2026_02_18_033151](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_stargunner_2026_02_18_033151) |
| | 8665 | a2c_gae_atari_arc | [a2c_gae_atari_stargunner_2026_02_01_132406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_stargunner_2026_02_01_132406) |

</details>

---

### Phase 5: MuJoCo Playground (JAX/MJX GPU-Accelerated)

[MuJoCo Playground](https://google-deepmind.github.io/mujoco_playground/) | Continuous state/action | MJWarp GPU backend

**Settings**: max_frame 100M | num_envs 2048 | max_session 4

**Spec file**: [ppo_playground.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml) — all envs via `-s env=playground/ENV`

**Reproduce**:

```bash
source .env && slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml SPEC_NAME train \
  -s env=playground/ENV -s max_frame=100000000 -n NAME
```

#### Phase 5.1: DM Control Suite (25 envs)

Classic control and locomotion tasks from the DeepMind Control Suite, ported to MJWarp GPU simulation.

| ENV | MA | SPEC_NAME | HF Data |
|-----|-----|-----------|---------|
| playground/AcrobotSwingup | 253.24 | ppo_playground_vnorm | [ppo_playground_acrobotswingup_2026_03_12_175809](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_acrobotswingup_2026_03_12_175809) |
| playground/AcrobotSwingupSparse | 146.98 | ppo_playground_vnorm | [ppo_playground_vnorm_acrobotswingupsparse_2026_03_14_161212](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_acrobotswingupsparse_2026_03_14_161212) |
| playground/BallInCup | 942.44 | ppo_playground_vnorm | [ppo_playground_ballincup_2026_03_12_105443](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_ballincup_2026_03_12_105443) |
| playground/CartpoleBalance | 968.23 | ppo_playground_vnorm | [ppo_playground_cartpolebalance_2026_03_12_141924](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_cartpolebalance_2026_03_12_141924) |
| playground/CartpoleBalanceSparse | 995.34 | ppo_playground_constlr | [ppo_playground_constlr_cartpolebalancesparse_2026_03_14_000352](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_constlr_cartpolebalancesparse_2026_03_14_000352) |
| playground/CartpoleSwingup | 729.09 | ppo_playground_constlr | [ppo_playground_constlr_cartpoleswingup_2026_03_17_041102](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_constlr_cartpoleswingup_2026_03_17_041102) |
| playground/CartpoleSwingupSparse | 521.98 | ppo_playground_constlr | [ppo_playground_constlr_cartpoleswingupsparse_2026_03_13_233449](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_constlr_cartpoleswingupsparse_2026_03_13_233449) |
| playground/CheetahRun | 883.44 | ppo_playground_vnorm | [ppo_playground_vnorm_cheetahrun_2026_03_14_161211](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_cheetahrun_2026_03_14_161211) |
| playground/FingerSpin | 713.35 | ppo_playground_fingerspin | [ppo_playground_fingerspin_fingerspin_2026_03_13_033911](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_fingerspin_fingerspin_2026_03_13_033911) |
| playground/FingerTurnEasy | 663.58 | ppo_playground_vnorm | [ppo_playground_fingerturneasy_2026_03_12_175835](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_fingerturneasy_2026_03_12_175835) |
| playground/FingerTurnHard | 590.43 | ppo_playground_vnorm_constlr | [ppo_playground_vnorm_constlr_fingerturnhard_2026_03_16_234509](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_fingerturnhard_2026_03_16_234509) |
| playground/FishSwim | 580.57 | ppo_playground_vnorm_constlr_clip03 | [ppo_playground_vnorm_constlr_clip03_fishswim_2026_03_14_002112](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_clip03_fishswim_2026_03_14_002112) |
| playground/HopperHop | 22.00 | ppo_playground_vnorm | [ppo_playground_hopperhop_2026_03_12_110855](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_hopperhop_2026_03_12_110855) |
| playground/HopperStand | 237.15 | ppo_playground_vnorm | [ppo_playground_vnorm_hopperstand_2026_03_14_095438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_hopperstand_2026_03_14_095438) |
| playground/HumanoidRun | 18.83 | ppo_playground_humanoid | [ppo_playground_humanoid_humanoidrun_2026_03_14_115522](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_humanoid_humanoidrun_2026_03_14_115522) |
| playground/HumanoidStand | 114.86 | ppo_playground_humanoid | [ppo_playground_humanoid_humanoidstand_2026_03_14_115516](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_humanoid_humanoidstand_2026_03_14_115516) |
| playground/HumanoidWalk | 47.01 | ppo_playground_humanoid | [ppo_playground_humanoid_humanoidwalk_2026_03_14_172235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_humanoid_humanoidwalk_2026_03_14_172235) |
| playground/PendulumSwingup | 637.46 | ppo_playground_pendulum | [ppo_playground_pendulum_pendulumswingup_2026_03_13_033818](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_pendulum_pendulumswingup_2026_03_13_033818) |
| playground/PointMass | 868.09 | ppo_playground_vnorm_constlr | [ppo_playground_vnorm_constlr_pointmass_2026_03_14_095452](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_pointmass_2026_03_14_095452) |
| playground/ReacherEasy | 955.08 | ppo_playground_vnorm | [ppo_playground_reachereasy_2026_03_12_122115](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_reachereasy_2026_03_12_122115) |
| playground/ReacherHard | 946.99 | ppo_playground_vnorm | [ppo_playground_reacherhard_2026_03_12_123226](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_reacherhard_2026_03_12_123226) |
| playground/SwimmerSwimmer6 | 591.13 | ppo_playground_vnorm_constlr | [ppo_playground_vnorm_constlr_swimmerswimmer6_2026_03_14_000406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_swimmerswimmer6_2026_03_14_000406) |
| playground/WalkerRun | 759.71 | ppo_playground_vnorm | [ppo_playground_vnorm_walkerrun_2026_03_14_161354](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_walkerrun_2026_03_14_161354) |
| playground/WalkerStand | 948.35 | ppo_playground_vnorm | [ppo_playground_vnorm_walkerstand_2026_03_14_161415](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_walkerstand_2026_03_14_161415) |
| playground/WalkerWalk | 945.31 | ppo_playground_vnorm | [ppo_playground_vnorm_walkerwalk_2026_03_14_161338](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_walkerwalk_2026_03_14_161338) |

*(Training-curve plot grid, mean returns MA vs frames per env; images in `plots/`.)*

#### Phase 5.2: Locomotion Robots (19 envs)

Real-world robot locomotion — quadrupeds (Go1, Spot, Barkour) and humanoids (H1, G1, T1, Op3, Apollo, BerkeleyHumanoid) on flat and rough terrain.

| ENV | MA | SPEC_NAME | HF Data |
|-----|-----|-----------|---------|
| playground/ApolloJoystickFlatTerrain | 17.44 | ppo_playground_loco_precise | [ppo_playground_loco_precise_apollojoystickflatterrain_2026_03_14_210939](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_apollojoystickflatterrain_2026_03_14_210939) |
| playground/BarkourJoystick | 0.0 | ppo_playground_loco | [ppo_playground_loco_barkourjoystick_2026_03_14_194525](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_barkourjoystick_2026_03_14_194525) |
| playground/BerkeleyHumanoidJoystickFlatTerrain | 32.29 | ppo_playground_loco_precise | [ppo_playground_loco_precise_berkeleyhumanoidjoystickflatterrain_2026_03_14_213019](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_berkeleyhumanoidjoystickflatterrain_2026_03_14_213019) |
| playground/BerkeleyHumanoidJoystickRoughTerrain | 21.25 | ppo_playground_loco_precise | [ppo_playground_loco_precise_berkeleyhumanoidjoystickroughterrain_2026_03_15_150211](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_berkeleyhumanoidjoystickroughterrain_2026_03_15_150211) |
| playground/G1JoystickFlatTerrain | 1.85 | ppo_playground_loco_precise | [ppo_playground_loco_precise_g1joystickflatterrain_2026_03_15_150219](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_g1joystickflatterrain_2026_03_15_150219) |
| playground/G1JoystickRoughTerrain | -2.75 | ppo_playground_loco_precise | [ppo_playground_loco_precise_g1joystickroughterrain_2026_03_19_015137](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_g1joystickroughterrain_2026_03_19_015137) |
| playground/Go1Footstand | 23.48 | ppo_playground_loco_precise | [ppo_playground_loco_precise_go1footstand_2026_03_16_174009](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_go1footstand_2026_03_16_174009) |
| playground/Go1Getup | 18.16 | ppo_playground_loco_go1 | [ppo_playground_loco_go1_go1getup_2026_03_16_132801](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_go1_go1getup_2026_03_16_132801) |
| playground/Go1Handstand | 17.88 | ppo_playground_loco_precise | [ppo_playground_loco_precise_go1handstand_2026_03_16_155437](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_go1handstand_2026_03_16_155437) |
| playground/Go1JoystickFlatTerrain | 0.0 | ppo_playground_loco | [ppo_playground_loco_go1joystickflatterrain_2026_03_14_204658](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_go1joystickflatterrain_2026_03_14_204658) |
| playground/Go1JoystickRoughTerrain | 0.00 | ppo_playground_loco | [ppo_playground_loco_go1joystickroughterrain_2026_03_15_150321](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_go1joystickroughterrain_2026_03_15_150321) |
| playground/H1InplaceGaitTracking | 11.95 | ppo_playground_loco_precise | [ppo_playground_loco_precise_h1inplacegaittracking_2026_03_16_170327](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_h1inplacegaittracking_2026_03_16_170327) |
| playground/H1JoystickGaitTracking | 31.11 | ppo_playground_loco_precise | [ppo_playground_loco_precise_h1joystickgaittracking_2026_03_16_170412](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_h1joystickgaittracking_2026_03_16_170412) |
| playground/Op3Joystick | 0.00 | ppo_playground_loco | [ppo_playground_loco_op3joystick_2026_03_15_150120](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_op3joystick_2026_03_15_150120) |
| playground/SpotFlatTerrainJoystick | 48.58 | ppo_playground_loco_precise | [ppo_playground_loco_precise_spotflatterrainjoystick_2026_03_16_180747](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_spotflatterrainjoystick_2026_03_16_180747) |
| playground/SpotGetup | 19.39 | ppo_playground_loco | [ppo_playground_loco_spotgetup_2026_03_14_213703](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_spotgetup_2026_03_14_213703) |
| playground/SpotJoystickGaitTracking | 36.90 | ppo_playground_loco | [ppo_playground_loco_spotjoystickgaittracking_2026_03_19_015106](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_spotjoystickgaittracking_2026_03_19_015106) |
| playground/T1JoystickFlatTerrain | 13.42 | ppo_playground_loco_precise | [ppo_playground_loco_precise_t1joystickflatterrain_2026_03_14_220250](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_t1joystickflatterrain_2026_03_14_220250) |
| playground/T1JoystickRoughTerrain | 2.58 | ppo_playground_loco_precise | [ppo_playground_loco_precise_t1joystickroughterrain_2026_03_15_162332](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_t1joystickroughterrain_2026_03_15_162332) |

*(Training-curve plot grid, mean returns MA vs frames per env; images in `plots/`.)*

#### Phase 5.3: Manipulation (10 envs)

Robotic manipulation — Panda arm pick/place, Aloha bimanual, Leap dexterous hand, and AeroCube orientation tasks.

| ENV | MA | SPEC_NAME | HF Data |
|-----|-----|-----------|---------|
| playground/AeroCubeRotateZAxis | -3.09 | ppo_playground_loco | [ppo_playground_loco_aerocuberotatezaxis_2026_03_20_012502](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_aerocuberotatezaxis_2026_03_20_012502) |
| playground/AlohaHandOver | 3.65 | ppo_playground_loco | [ppo_playground_loco_alohahandover_2026_03_15_023712](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_alohahandover_2026_03_15_023712) |
| playground/AlohaSinglePegInsertion | 220.93 | ppo_playground_manip_aloha_peg | [ppo_playground_manip_aloha_peg_alohasinglepeginsertion_2026_03_17_122613](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_manip_aloha_peg_alohasinglepeginsertion_2026_03_17_122613) |
| playground/LeapCubeReorient | 74.68 | ppo_playground_loco | [ppo_playground_loco_leapcubereorient_2026_03_15_150420](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_leapcubereorient_2026_03_15_150420) |
| playground/LeapCubeRotateZAxis | 91.65 | ppo_playground_loco | [ppo_playground_loco_leapcuberotatezaxis_2026_03_15_150334](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_leapcuberotatezaxis_2026_03_15_150334) |
| playground/PandaOpenCabinet | 11081.51 | ppo_playground_loco | [ppo_playground_loco_pandaopencabinet_2026_03_15_150318](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandaopencabinet_2026_03_15_150318) |
| playground/PandaPickCube | 4586.13 | ppo_playground_loco | [ppo_playground_loco_pandapickcube_2026_03_15_023744](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandapickcube_2026_03_15_023744) |
| playground/PandaPickCubeCartesian | 10.58 | ppo_playground_loco | [ppo_playground_loco_pandapickcubecartesian_2026_03_15_023810](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandapickcubecartesian_2026_03_15_023810) |
| playground/PandaPickCubeOrientation | 4281.66 | ppo_playground_loco | [ppo_playground_loco_pandapickcubeorientation_2026_03_19_015108](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandapickcubeorientation_2026_03_19_015108) |
| playground/PandaRobotiqPushCube | 1.31 | ppo_playground_loco | [ppo_playground_loco_pandarobotiqpushcube_2026_03_15_042131](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandarobotiqpushcube_2026_03_15_042131) |

*(Training-curve plot grid, mean returns MA vs frames per env; images in `plots/`.)*

docs/CHANGELOG.md
CHANGED

# SLM-Lab v5.3.0

MuJoCo Playground integration. 54 GPU-accelerated environments via JAX/MJX backend.

**What changed:**
- **New env backend**: MuJoCo Playground (DeepMind) — 25 DM Control Suite, 19 Locomotion (Go1, Spot, H1, G1), 10 Manipulation (Panda, ALOHA, LEAP)
- **PlaygroundVecEnv**: JAX-native vectorized env wrapper with `jax.vmap` batching and Brax auto-reset. Converts JAX arrays to numpy at the API boundary for PyTorch compatibility
- **Prefix routing**: `playground/EnvName` in specs routes to PlaygroundVecEnv instead of Gymnasium
- **Optional dependency**: `uv sync --group playground` installs `mujoco-playground`, `jax`, `brax`
- **Benchmark specs**: `slm_lab/spec/benchmark/playground/` — SAC specs for all 54 envs across 3 categories
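
The `jax.vmap` batching plus in-step auto-reset pattern named in the PlaygroundVecEnv bullet can be sketched with a toy environment. All names below are hypothetical illustrations, not the real `PlaygroundVecEnv` or mujoco_playground API; the real wrapper jits the env's own reset/step.

```python
# Toy sketch of vmap batching + Brax-style auto-reset + numpy conversion.
import jax
import jax.numpy as jnp
import numpy as np

def reset(key):
    del key  # toy env: deterministic start at the origin
    return jnp.zeros(2)

def step(state, action):
    new = state + action
    done = (jnp.abs(new) > 1.0).any()                # episode ends out of bounds
    new = jnp.where(done, jnp.zeros_like(new), new)  # auto-reset inside step
    return new, -jnp.abs(new).sum(), done

# vmap over the env batch axis, then jit once for the whole batch
batched_reset = jax.jit(jax.vmap(reset))
batched_step = jax.jit(jax.vmap(step))

keys = jax.random.split(jax.random.PRNGKey(0), 4)    # 4 parallel envs
state = batched_reset(keys)
state, reward, done = batched_step(state, 0.6 * jnp.ones((4, 2)))
obs = np.asarray(state)  # convert JAX -> numpy at the PyTorch boundary
```

Because finished envs restart inside `step`, the caller always receives a full batch of valid observations, which is what lets the training loop stay free of per-env reset bookkeeping.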

<!-- TODO: Add benchmark results from DM Control Suite baseline runs (task #11) -->

---

# SLM-Lab v5.2.0

Training path performance optimization. **+15% SAC throughput on GPU**, verified with no score regression.

docs/PHASE5_OPS.md
ADDED

# Phase 5.1 PPO — Operations Tracker

Single source of truth for in-flight work. Resume from here.

---

## Principles

1. **Two canonical specs**: `ppo_playground` (DM Control) and `ppo_playground_loco` (Loco). Per-env variants only when officially required: `ppo_playground_fingerspin` (gamma=0.95), `ppo_playground_pendulum` (training_epoch=4, action_repeat=4 via code).
2. **100M frames hard cap** — no extended runs. If an env doesn't hit its target at 100M, fix the spec.
3. **Strategic reruns**: only rerun failing/⚠️ envs. Already-✅ envs skip revalidation.
4. **Score metric**: use `total_reward_ma` (final moving average of total reward) — it measures end-of-training performance and matches the mujoco_playground reference scores.
5. **Official reference**: check `~/.cache/uv/archive-v0/ON8dY3irQZTYI3Bok0SlC/mujoco_playground/config/dm_control_suite_params.py` for per-env overrides.
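
The "final moving average" in principle 4 can be sketched as a trailing-window mean over per-checkpoint total rewards. This is a hypothetical minimal computation for illustration; the lab's own analysis output (`trial_metrics`) is the source of truth.

```python
import numpy as np

def total_reward_ma(total_rewards, window=100):
    """Mean of the last `window` checkpoint rewards (hypothetical helper):
    rewards end-of-training performance, not a lucky mid-training peak."""
    rewards = np.asarray(total_rewards, dtype=float)
    return float(rewards[-window:].mean())
```

A run that only converges in its final stretch still scores well under this metric, which is the point of measuring end-of-training rather than peak performance.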
|
| 14 |
+
|
| 15 |
+
---
## Wave 3 (2026-03-16)

**Fixes applied:**
- stderr suppression: MuJoCo C-level warnings (ccd_iterations, nefc overflow, broadphase overflow) silenced in playground.py
- obs fix: `_get_obs` now passes only the "state" key for dict-obs envs (it previously concatenated privileged_state+state, which was incorrect)

**Envs graduated to ✅ (close enough):**
FishSwim, PointMass, ReacherHard, WalkerStand, WalkerWalk, SpotGetup, SpotJoystickGaitTracking, AlohaHandOver

**Failing envs by root cause:**
- Humanoid double-norm (rs10 fix): HumanoidStand (114→700), HumanoidWalk (47→500), HumanoidRun (18→130)
- Dict obs fix (now fixed): Go1Flat/Rough/Getup/Handstand, G1Flat/Rough, T1Flat/Rough
- Unknown: BarkourJoystick (0/35), Op3Joystick (0/20)
- Needs hparam work: H1Inplace (4→10), H1Joystick (16→30), SpotFlat (11→30)
- Manipulation: AlohaPeg (188→300), LeapCubeReorient (74→200)
- Infeasible: PandaRobotiqPushCube, AeroCubeRotateZAxis

**Currently running:** (to be populated by ops)

---

## Currently Running (as of 2026-03-14 ~00:00)

**Wave V (p5-ppo17) — Constant LR test (4 runs, just launched)**

Testing constant LR (the Brax default) in isolation — never tested before. Key hypothesis: LR decay hurts late-converging envs.

| Run | Env | Spec | Key Change | Old Best | Target |
|---|---|---|---|---|---|
| p5-ppo17-csup | CartpoleSwingup | constlr | constant LR + minibatch=4096 | 576.1 | 800 |
| p5-ppo17-csupsparse | CartpoleSwingupSparse | constlr | constant LR + minibatch=4096 | 296.3 | 425 |
| p5-ppo17-acrobot | AcrobotSwingup | vnorm_constlr | constant LR + vnorm | 173 | 220 |
| p5-ppo17-fteasy | FingerTurnEasy | vnorm_constlr | constant LR + vnorm | 571 | 950 |

**Wave IV-H (p5-ppo16h) — Humanoid with wider policy (3 runs, ~2.5h remaining)**

New `ppo_playground_humanoid` variant: 2×256 policy (vs 2×64), constant LR, vnorm=true.
Based on the Phase 3 Gymnasium Humanoid-v5 success (2661 MA with 2×256 + constant LR).

| Run | Env | Old Best | Target |
|---|---|---|---|
| p5-ppo16h-hstand | HumanoidStand | 18.36 | 700 |
| p5-ppo16h-hwalk | HumanoidWalk | 7.68 | 500 |
| p5-ppo16h-hrun | HumanoidRun | 3.19 | 130 |

**Wave VI (p5-ppo18) — Brax 4×32 policy + constant LR + vnorm (3 runs, just launched)**

Testing the Brax default policy architecture (4 layers × 32 units vs our 2 × 64).
A deeper, narrower policy may learn better features for precision tasks.

| Run | Env | Old Best | Target |
|---|---|---|---|
| p5-ppo18-fteasy | FingerTurnEasy | 571 | 950 |
| p5-ppo18-fthard | FingerTurnHard | 484 | 950 |
| p5-ppo18-fishswim | FishSwim | 463 | 650 |

**Wave IV tail (p5-ppo16) — completed**

| Run | Env | strength | Target | New best? |
|---|---|---|---|---|
| p5-ppo16-swimmer6 | SwimmerSwimmer6 | 509.3 | 560 | ✅ New best (final_strength=560.6) |
| p5-ppo16-fishswim | FishSwim | 420.6 | 650 | ❌ Worse than 463 |

**Wave IV results (p5-ppo16, vnorm=true rerun with reverted spec — completed):**

All ran with vnorm=true. CartpoleSwingup/Sparse came out worse (vnorm=false is the better setting for them, so this was the wrong setting).
Precision envs also scored below their old bests. Humanoid is still failing with the standard 2×64 policy.

| Env | p16 strength | Old Best | Target | Verdict |
|---|---|---|---|---|
| CartpoleSwingup | 316.2 | 576.1 (false) | 800 | ❌ wrong vnorm |
| CartpoleSwingupSparse | 288.7 | 296.3 (false) | 425 | ❌ wrong vnorm |
| AcrobotSwingup | 145.4 | 173 (true) | 220 | ❌ worse |
| FingerTurnEasy | 511.1 | 571 (true) | 950 | ❌ worse |
| FingerTurnHard | 368.6 | 484 (true) | 950 | ❌ worse |
| HumanoidStand | 12.72 | 18.36 | 700 | ❌ still failing |
| HumanoidWalk | 7.46 | 7.68 | 500 | ❌ still failing |
| HumanoidRun | 3.19 | 3.19 | 130 | ❌ still failing |

**CONCLUSION**: the reverted spec didn't help — no new bests. Consistency was negative for CartpoleSwingup/Sparse (high seed variance).
Next: the constant LR test (Wave V) and a wider policy for Humanoid (Wave IV-H).

**Wave III results (p5-ppo13/p5-ppo15, 5-layer value + no grad clip — completed):**

Only CartpoleSwingup improved slightly (623.8 vs 576.1). All others regressed.
FishSwim p5-ppo15: strength=411.6 (vs 463 old best). AcrobotSwingup p5-ppo15: strength=95.4 (vs 173).

**CONCLUSION**: 5-layer value + no grad clip is NOT a general improvement. Reverted to 3-layer + clip_grad_val=1.0.

**Wave H results (p5-ppo12, ALL completed — NONE improved over old bests):**
Re-running the same spec (variance reruns + vnorm) didn't help. Run-to-run variance is high, and the old bests likely represent lucky seeds. Hyperparameter tuning has hit diminishing returns.

**Wave G/G2 results (normalize_v_targets=false ablation, ALL completed):**

| Env | p11 strength | Old Best (true) | Target | Change | Verdict |
|---|---|---|---|---|---|
| **PendulumSwingup** | **533.5** | 276 | 395 | +93% | **✅ NEW PASS** |
| **FingerSpin** | **652.4** | 561 | 600 | +16% | **✅ NEW PASS** |
| **CartpoleBalanceSparse** | **690.4** | 545 | 700 | +27% | **⚠️ 99% of target** |
| **CartpoleSwingup** | **576.1** | 443/506 | 800 | +30% | ⚠️ improved |
| **CartpoleSwingupSparse** | **296.3** | 271 | 425 | +9% | ⚠️ improved |
| PointMass | 854.4 | 863 | 900 | -1% | ⚠️ same |
| FishSwim | 293.9 | 463 | 650 | -36% | ❌ regression |
| FingerTurnEasy | 441.1 | 571 | 950 | -23% | ❌ regression |
| SwimmerSwimmer6 | 386.2 | 485 | 560 | -20% | ❌ regression |
| FingerTurnHard | 335.7 | 484 | 950 | -31% | ❌ regression |
| AcrobotSwingup | 105.1 | 173 | 220 | -39% | ❌ regression |
| HumanoidStand | 12.87 | 18.36 | 500 | -30% | ❌ still failing |

**CONCLUSION**: `normalize_v_targets: false` helps 5/12, hurts 6/12, and is neutral for 1/12.
- **false wins**: PendulumSwingup, FingerSpin, CartpoleBalanceSparse, CartpoleSwingup, CartpoleSwingupSparse
- **true wins**: FishSwim, FingerTurnEasy/Hard, SwimmerSwimmer6, AcrobotSwingup, PointMass
- **Decision**: per-env spec selection. New `ppo_playground_vnorm` variant for precision envs.

**Wave F results (multi-unroll=16 + proven hyperparameters):**

| Env | p10 strength | p10 final_str | Old best str | Target | Verdict |
|---|---|---|---|---|---|
| CartpoleSwingup | 342 | 443 | 443 | 800 | Same |
| FingerTurnEasy | 529 | 685 | 571 | 950 | Better final, worse strength |
| FingerSpin | 402 | 597 | 561 | 600 | Better final (near target!), worse strength |
| FingerTurnHard | 368 | 559 | 484 | 950 | Better final, worse strength |
| SwimmerSwimmer6 | 251 | 384 | 485 | 560 | Worse |
| CartpoleSwingupSparse | 56 | 158 | 271 | 425 | MUCH worse |
| AcrobotSwingup | 31 | 63 | 173 | 220 | MUCH worse |

**CONCLUSION**: multi-unroll adds no benefit over single-unroll for any env by the `strength` metric.
The `final_strength` improvements for the Finger tasks are offset by `strength` regressions.
Root cause: a stale old_net (480 vs 30 steps between copies) makes the policy ratio less accurate.
**Spec reverted to single-unroll (num_unrolls=1)**. Multi-unroll code is preserved in ppo.py.

**Wave E results (multi-unroll + Brax hyperparameters — ALL worse):**

The Brax-matched spec (clip_eps=0.3, constant LR, 5-layer value, reward_scale=10, minibatch=30720)
hurt every env except HopperStand (which used the wrong spec before). Reverted.

**Wave C completed results** (all reward_scale=10; divide by 10 for the true score):

| Run | Env | strength/10 | final_strength/10 | total_reward_ma/10 | Target | vs Old |
|---|---|---|---|---|---|---|
| p5-ppo7-cartpoleswingup | CartpoleSwingup | 556.6 | 670.5 | 705.3 | 800 | 443→557 ✅ improved |
| p5-ppo7-fingerturneasy | FingerTurnEasy | 511.1 | 693.2 | 687.0 | 950 | 571→511 ❌ **WORSE** |
| p5-ppo7-fingerturnhard | FingerTurnHard | 321.9 | 416.8 | 425.2 | 950 | 484→322 ❌ **WORSE** |
| p5-ppo7-cartpoleswingupsparse2 | CartpoleSwingupSparse | 144.0 | 360.6 | 337.7 | 425 | 271→144 ❌ **WORSE** |

**KEY FINDING**: time_horizon=480 helps CartpoleSwingup (+25%) but HURTS FingerTurn (-30 to -50%) and CartpoleSwingupSparse (-47%). Long GAE horizons produce noisy advantage estimates for precision/sparse tasks. The official Brax approach is 16×30-step unrolls (short GAE per unroll), NOT one 480-step unroll.

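To make the horizon effect concrete, here is a minimal GAE sketch (plain Python, not SLM-Lab's actual implementation; no done-flag handling): the advantage at each step chains `gamma * lam`-discounted TD errors across the whole unroll, so a 480-step unroll sums up to 480 stochastic terms where a 30-step unroll truncates at 30.

```python
def gae(rewards, values, next_value, gamma=0.995, lam=0.95):
    """Generalized Advantage Estimation over one unroll (minimal sketch)."""
    advs = [0.0] * len(rewards)
    last, v_next = 0.0, next_value
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * v_next - values[t]  # TD error at step t
        last = delta + gamma * lam * last                # chain across the unroll
        advs[t] = last
        v_next = values[t]
    return advs


# With constant reward 1 and a zero value baseline, the step-0 advantage
# accumulates more terms (each noisy in real rollouts) the longer the unroll.
a30 = gae([1.0] * 30, [0.0] * 30, 0.0)[0]
a480 = gae([1.0] * 480, [0.0] * 480, 0.0)[0]
```

Each extra chained term contributes estimator variance in real rollouts, which is the "noisy advantage" failure mode described above.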
---
## Spec Changes Applied (2026-03-13)

### Fix 1: reward_scale=10.0 (matches official mujoco_playground)
- `playground.py`: `PlaygroundVecEnv` now multiplies rewards by `self._reward_scale`
- `__init__.py`: threads `reward_scale` from the env spec to the wrapper
- `ppo_playground.yaml`: `reward_scale: 10.0` in the shared `_env` anchor

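In effect the change is a reward-scaling step in the vec-env wrapper. A minimal sketch assuming a gym-style `step` API (the wrapper and dummy env names here are hypothetical, not SLM-Lab's actual classes — the real change lives in `PlaygroundVecEnv`):

```python
class RewardScaleWrapper:
    """Multiply each reward by a constant before the agent sees it.

    Hypothetical stand-in for the reward_scale plumbing described above.
    """

    def __init__(self, env, reward_scale=10.0):
        self.env = env
        self.reward_scale = reward_scale

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward * self.reward_scale, done, info


class _DummyEnv:  # stand-in env returning a fixed reward of 0.5
    def step(self, action):
        return None, 0.5, False, {}


_, scaled_reward, _, _ = RewardScaleWrapper(_DummyEnv(), reward_scale=10.0).step(0)
```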
### Fix 2: Revert minibatch_size 2048→4096 (fixes the CartpoleSwingup regression)
- `ppo_playground.yaml`: all DM Control specs (ppo_playground, fingerspin, pendulum) now use minibatch_size=4096
- 15 minibatches × 16 epochs = 240 grad steps (was 30×16=480)
- Restores p5-ppo5 performance for CartpoleSwingup (803 vs 443)

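The gradient-step arithmetic above, assuming the DM Control batch of num_envs=2048 × time_horizon=30 transitions per update:

```python
# Transitions collected per update (assumed DM Control spec values).
num_envs, time_horizon, epochs = 2048, 30, 16
batch = num_envs * time_horizon  # 61,440 transitions

minibatches_4096 = batch // 4096             # p5-ppo5 / reverted setting
minibatches_2048 = batch // 2048             # p5-ppo6 setting
grad_steps_4096 = minibatches_4096 * epochs  # 15 * 16 = 240
grad_steps_2048 = minibatches_2048 * epochs  # 30 * 16 = 480
```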
### Fix 3: Brax-matched spec (commit 6eb08fe9) — time_horizon=480, clip_eps=0.3, constant LR, 5-layer value net
- Increased time_horizon from 30→480 to match total data per update (983K transitions)
- clip_eps 0.2→0.3, constant LR (min_factor=1.0), 5-layer [256×5] value net
- Raised the action std upper bound (max=2.0 in policy_util.py)
- **Result**: CartpoleSwingup improved (443→557 strength), but FingerTurn and CartpoleSwingupSparse got WORSE
- **Root cause**: a 1×480-step unroll computes GAE over 480 steps (noisy), vs the official 16×30-step unrolls (short, accurate GAE)

### Fix 4: ppo_playground_short variant (time_horizon=30 + Brax improvements)
- Keeps: reward_scale=10, clip_eps=0.3, constant LR, 5-layer value net, no grad clipping
- Reverts: time_horizon=30, minibatch_size=4096 (15 minibatches, 240 grad steps)
- **Hypothesis**: short GAE + the other Brax improvements = best of both worlds for precision tasks
- Testing on FingerTurnEasy/Hard first (Wave D p5-ppo8-*)

### Fix 5: Multi-unroll collection (IMPLEMENTED but NOT USED — code stays, spec reverted)
- Added a `num_unrolls` parameter to PPO (ppo.py, actor_critic.py). The code works correctly.
- **Brax-matched spec (Wave E, p5-ppo9)**: clip_eps=0.3, constant LR, 5-layer value, reward_scale=10
  - Result: WORSE on 5/7 tested envs. Only CartpoleSwingup improved (443→506).
  - Root cause: minibatch_size=30720 → 7.5× fewer gradient steps per transition → underfitting
- **Reverted spec + multi-unroll (Wave F, p5-ppo10)**: clip_eps=0.2, LR decay, 3-layer value, minibatch=4096
  - Result: same or WORSE on all envs by the `strength` metric. Same fps as single-unroll.
  - Training compute per env step is identical, but old_net staleness (480 vs 30 steps) hurts.
- **Conclusion**: multi-unroll adds complexity without benefit. Reverted the spec to single-unroll (num_unrolls=1). Code preserved in ppo.py (defaults to 1). The spec uses the original hyperparameters.

---

## Completed Runs Needing Intake

### Humanoid (ppo_playground_loco, post log_std fix) — intake immediately

| Run | HF Folder | strength | target | HF status |
|---|---|---|---|---|
| p5-ppo6-humanoidrun | ppo_playground_loco_humanoidrun_2026_03_12_175917 | 2.78 | 130 | ✅ uploaded |
| p5-ppo6-humanoidwalk | ppo_playground_loco_humanoidwalk_2026_03_12_175817 | 6.82 | 500 | ✅ uploaded |
| p5-ppo6-humanoidstand | ppo_playground_loco_humanoidstand_2026_03_12_175810 | 12.45 | 700 | ❌ **UPLOAD FAILED (412)** — re-upload first |

Re-upload HumanoidStand:
```bash
source .env && huggingface-cli upload SLM-Lab/benchmark-dev \
  hf_data/data/benchmark-dev/data/ppo_playground_loco_humanoidstand_2026_03_12_175810 \
  data/ppo_playground_loco_humanoidstand_2026_03_12_175810 --repo-type dataset
```

**Conclusion**: the loco spec still fails completely for Humanoid — the log_std fix is insufficient. See the spec fixes below.

### BENCHMARKS.md correction needed (commit b6ef49d9 used the wrong metric)

intake-a used `total_reward_ma` instead of `strength`. Fix these 4 entries:

| Env | Run | strength (correct) | total_reward_ma (wrong) | target |
|---|---|---|---|---|
| AcrobotSwingup | p5-ppo6-acrobotswingup2 | **172.8** | 253.24 | 220 |
| CartpoleBalanceSparse | p5-ppo6-cartpolebalancesparse2 | **545.1** | 991.81 | 700 |
| CartpoleSwingup | p5-ppo6-cartpoleswingup2 | **unknown — extract from logs** | 641.51 | 800 |
| CartpoleSwingupSparse | p5-ppo6-cartpoleswingupsparse | **270.9** | 331.23 | 425 |

Extract the correct values: `dstack logs p5-ppo6-NAME --since 6h 2>&1 | grep "trial_metrics" | tail -1` → use the `strength:` field.

Also check FingerSpin: `dstack logs p5-ppo6-fingerspin2 --since 6h | grep trial_metrics | tail -1` — confirm the strength value.

**Metric decision needed**: strength penalizes slow learners (CartpoleBalanceSparse strength=545 but final MA=992). Consider switching ALL entries to `final_strength`. But this requires auditing every existing entry — do it as a batch before publishing.

---

## Queue (launch when slots open, all 100M)

| Priority | Env | Spec | Run name | Rationale |
|---|---|---|---|---|
| 1 | PendulumSwingup | ppo_playground_pendulum | p5-ppo6-pendulumswingup | action_repeat=4 + training_epoch=4 (code fix applied) |
| 2 | FingerSpin | ppo_playground_fingerspin | p5-ppo6-fingerspin3 | canonical gamma=0.95 run; fingerspin2 used gamma=0.995 (override silently ignored) |

---

## Full Env Status

### ✅ Complete (13/25)
| Env | strength | target | normalize_v_targets |
|---|---|---|---|
| CartpoleBalance | 968.23 | 950 | true |
| AcrobotSwingupSparse | 42.74 | 15 | true |
| BallInCup | 942.44 | 680 | true |
| CheetahRun | 865.83 | 850 | true |
| ReacherEasy | 955.08 | 950 | true |
| ReacherHard | 946.99 | 950 | true |
| WalkerRun | 637.80 | 560 | true |
| WalkerStand | 970.94 | 1000 | true |
| WalkerWalk | 952 | 960 | true |
| HopperHop | 22.00 | ~2 | true |
| HopperStand | 118.2 | ~70 | true |
| PendulumSwingup | 533.5 | 395 | **false** |
| FingerSpin | 652.4 | 600 | **false** |

### ⚠️ Below target (9/25)
| Env | best strength | target | best with | status |
|---|---|---|---|---|
| CartpoleSwingup | 576.1 | 800 | false | Improved +30% from 443 (true) |
| CartpoleBalanceSparse | 545 | 700 | true | Testing false (p5-ppo11) |
| CartpoleSwingupSparse | 296.3 | 425 | false | Improved +9% from 271 (true) |
| AcrobotSwingup | 173 | 220 | true | false=105, regressed |
| FingerTurnEasy | 571 | 950 | true | false=441, regressed |
| FingerTurnHard | 484 | 950 | true | false=336, regressed |
| FishSwim | 463 | 650 | true | Testing false (p5-ppo11) |
| SwimmerSwimmer6 | 509.3 | 560 | true | final_strength=560.6 (at target!) |
| PointMass | 863 | 900 | true | false=854, ~same |

### ❌ Fundamental failure — Humanoid (3/25)
| Env | best strength | target | diagnosis |
|---|---|---|---|
| HumanoidRun | 3.19 | 130 | <3% of target; NormalTanh distribution needed |
| HumanoidWalk | 7.68 | 500 | <2% of target; wider policy (2×256) didn't help |
| HumanoidStand | 18.36 | 700 | <3% of target; constant LR + wider policy tested, no improvement |

**Humanoid tested and failed**: wider 2×256 policy + constant LR + vnorm (Wave IV-H). The MA stayed flat at 8-10 for HumanoidStand over the entire run. The root cause is likely the NormalTanh distribution (state-dependent std + tanh squashing) — a fundamental architectural difference from Brax.

---

## Spec Fixes Required

### Priority 1: Humanoid loco spec (update ppo_playground_loco)

The official config uses `num_envs=8192, time_horizon=20 (unroll_length)` for loco. We use `num_envs=2048, time_horizon=64`.

**Proposed update to ppo_playground_loco**:
```yaml
ppo_playground_loco:
  agent:
    algorithm:
      gamma: 0.97
      time_horizon: 20  # was 64; official unroll_length=20
      training_epoch: 4
  env:
    num_envs: 8192  # was 2048; official loco num_envs=8192
```

**Before launching**: verify VRAM by checking whether 8192 envs fit on an A4500 20GB. Run one Humanoid env, then check `dstack logs NAME --since 10m | grep -i "memory\|OOM"` after 5 min.

**Rerun only**: HumanoidRun, HumanoidWalk, HumanoidStand (3 runs). HopperStand also uses the loco spec — add it if VRAM is confirmed OK.

### Priority 2: CartpoleSwingup regression

p5-ppo5 scored 803 ✅; p5-ppo6 scored ~641. The p5-ppo6 change was `minibatch_size: 2048` (30 minibatches) vs p5-ppo5's 4096 (15 minibatches). More gradient steps per iteration hurt CartpoleSwingup.

**Option A**: revert `ppo_playground` minibatch_size from 2048→4096 (15 minibatches). Rerun only the failing DM Control envs (CartpoleSwingup, CartpoleSwingupSparse, + any that need it).

**Option B**: accept 641 and note the trade-off — p5-ppo6 improved other envs (CartpoleBalance 968 was already ✅).

### Priority 3: FingerTurnEasy/Hard

No official override. At 570/? vs target 950, the gap is large. Check:
```bash
grep -A10 "Finger" ~/.cache/uv/archive-v0/ON8dY3irQZTYI3Bok0SlC/mujoco_playground/config/dm_control_suite_params.py
```

May need the deeper policy network [32,32,32,32] (the official arch) vs our [64,64].

---

## Tuning Principles Learned

1. **Check official per-env overrides first**: `dm_control_suite_params.py` has `discounting`, `action_repeat`, `num_updates_per_batch` per env. These are canonical.
2. **action_repeat** is env-level, not spec-level. Implemented in `playground.py` via the `_ACTION_REPEAT` dict. PendulumSwingup→4. Add others as found.
3. **NaN loss**: the `log_std` clamp (max=0.5) helps, but Humanoid (21 DOF) still has many NaN skips. Warnings are rate-limited to one per 10K skips. If NaN dominates → the spec is wrong.
4. **num_envs scales with task complexity**: Cartpole/Acrobot are fine at 2048. Humanoid locomotion needs 8192 for rollout diversity.
5. **time_horizon (unroll_length)**: DM Control official=30, loco official=20. Longer → more correlated rollouts → less diversity per update. Match the official values.
6. **Minibatch count**: more minibatches = more gradient steps per batch, which can overfit or slow convergence for simpler envs. 15 minibatches (p5-ppo5) vs 30 (p5-ppo6) — the latter hurt CartpoleSwingup.
7. **Sparse reward + strength metric**: strength (trajectory mean) severely penalizes sparse/delayed convergence. CartpoleBalanceSparse strength=545 but final MA=992. Resolve the metric question before publishing.
8. **High seed variance** (consistency < 0): some seeds solve, some don't → wrong spec, not bad luck. Fix exploration (entropy_coef) or use a different spec.
9. **-s overrides are silently ignored** if the YAML key isn't a `${variable}` placeholder. Always verify overrides took effect via logs, e.g. `dstack logs NAME | grep -E "gamma|lr|training_epoch"`.
10. **Loco spec failures**: if the loco spec gives <20 on an env with target >100, the issue is almost certainly a num_envs/time_horizon mismatch vs official, not a fundamental algo failure.

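Principle 2's env-level action repeat can be sketched as a thin wrapper that steps the inner env N times and sums the rewards (hypothetical class and table — the real lookup lives in playground.py's `_ACTION_REPEAT`; only the PendulumSwingup→4 entry is documented above):

```python
# Hypothetical per-env action-repeat table mirroring playground.py's _ACTION_REPEAT.
ACTION_REPEAT = {"PendulumSwingup": 4}


class ActionRepeat:
    """Step the inner env `repeat` times per agent action, summing rewards."""

    def __init__(self, env, env_name):
        self.env = env
        self.repeat = ACTION_REPEAT.get(env_name, 1)  # default: no repeat

    def step(self, action):
        total_reward, obs, done, info = 0.0, None, False, {}
        for _ in range(self.repeat):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:  # stop repeating on episode end
                break
        return obs, total_reward, done, info


class _OneRewardEnv:  # stand-in env: reward 1 per step, never done
    def step(self, action):
        return "obs", 1.0, False, {}


_, pendulum_reward, _, _ = ActionRepeat(_OneRewardEnv(), "PendulumSwingup").step(0)
_, default_reward, _, _ = ActionRepeat(_OneRewardEnv(), "CheetahRun").step(0)
```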
---

## Code Changes This Session

| Commit | Change |
|---|---|
| `8fe7bc76` | `playground.py`: `_ACTION_REPEAT` lookup for per-env action_repeat. `ppo_playground.yaml`: added `ppo_playground_fingerspin` and `ppo_playground_pendulum` specs. |
| `fb55c2f9` | `base.py`: rate-limit the NaN loss warning (every 10K skips). `ppo_playground.yaml`: revert log_frequency 1M→100K. |
| `3f4ede3d` | BENCHMARKS.md: mark HopperHop ✅. |

## Resume Commands

```bash
# Setup
git pull && uv sync --no-default-groups

# Check jobs
dstack ps

# Intake a completed run
dstack logs RUN_NAME --since 6h 2>&1 | grep "trial_metrics" | tail -1
dstack logs RUN_NAME --since 6h 2>&1 | grep -iE "Uploading|benchmark-dev"

# Pull HF data
source .env && huggingface-cli download SLM-Lab/benchmark-dev \
  --local-dir hf_data/data/benchmark-dev --repo-type dataset \
  --include "data/FOLDER_NAME/*"

# Plot
uv run slm-lab plot -t "EnvName" -d hf_data/data/benchmark-dev/data -f FOLDER_NAME

# Launch PendulumSwingup (queue priority 1)
source .env && uv run slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground_pendulum train \
  -s env=playground/PendulumSwingup -s max_frame=100000000 -n p5-ppo6-pendulumswingup

# Launch FingerSpin canonical (queue priority 2)
source .env && uv run slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground_fingerspin train \
  -s env=playground/FingerSpin -s max_frame=100000000 -n p5-ppo6-fingerspin3

# Launch Humanoid loco (after updating ppo_playground_loco spec to num_envs=8192, time_horizon=20)
source .env && uv run slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground_loco train \
  -s env=playground/HumanoidRun -s max_frame=100000000 -n p5-ppo6-humanoidrun2
```

---

## CRITICAL CORRECTION (2026-03-13) — Humanoid is DM Control, not Loco

**Root cause of the Humanoid failure**: HumanoidRun/Walk/Stand are registered in `dm_control_suite/__init__.py` — they ARE DM Control envs. We incorrectly ran them with `ppo_playground_loco` (gamma=0.97, 4 epochs, time_horizon=64).

The official config uses the DEFAULT DM Control params for them: discounting=0.995, 2048 envs, lr=1e-3, unroll_length=30, 16 epochs.

**NaN was never the root cause** — intake-b confirmed the NaN skips were 0, 0, 2 in the loco runs. The spec was simply wrong.

**Fix**: run all 3 Humanoid envs with `ppo_playground` (the DM Control spec). No spec change needed.

```bash
# Launch with the correct spec
source .env && uv run slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground train \
  -s env=playground/HumanoidRun -s max_frame=100000000 -n p5-ppo6-humanoidrun2

source .env && uv run slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground train \
  -s env=playground/HumanoidWalk -s max_frame=100000000 -n p5-ppo6-humanoidwalk2

source .env && uv run slm-lab run-remote --gpu \
  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground train \
  -s env=playground/HumanoidStand -s max_frame=100000000 -n p5-ppo6-humanoidstand2
```

**HopperStand**: also a DM Control env. If p5-ppo6-hopperstand (loco spec, 16.38) is below target, rerun with `ppo_playground`.

**Do NOT intake** the loco-spec Humanoid runs (2.78/6.82/12.45) — wrong spec, not valid benchmark results. The old ppo_playground runs (2.86/3.73) were also the wrong spec, but at least the right family.

**Updated queue (prepend these as highest priority)**:

| Priority | Env | Spec | Run name |
|---|---|---|---|
| 0 | HumanoidRun | ppo_playground | p5-ppo6-humanoidrun2 |
| 0 | HumanoidWalk | ppo_playground | p5-ppo6-humanoidwalk2 |
| 0 | HumanoidStand | ppo_playground | p5-ppo6-humanoidstand2 |
| 0 | HopperStand | ppo_playground | p5-ppo6-hopperstand2 (if loco result ⚠️) |

Note on the loco spec (`ppo_playground_loco`): it is only for actual locomotion robot envs (Go1, G1, BerkeleyHumanoid, etc.) — NOT for DM Control Humanoid.

---

## METRIC CORRECTION (2026-03-13) — strength vs final_strength

**Problem**: `strength` is the trajectory-averaged mean over the entire run. For slow-rising envs this severely underrepresents end-of-training performance. After correcting the metric to `strength`:

| Env | strength | total_reward_ma | target | conclusion |
|---|---|---|---|---|
| CartpoleSwingup | **443.0** | 641.51 | 800 | Massive regression from p5-ppo5 (803). Strength 443 << 665 (the 65M result) — the curve rises, but the slow start drags the average down |
| CartpoleBalanceSparse | **545.1** | 991.81 | 700 | Hits the target by the end (final MA=992), but the sparse reward delays convergence |
| AcrobotSwingup | **172.8** | 253.24 | 220 | Below target by strength, above by final MA |
| CartpoleSwingupSparse | **270.9** | 331.23 | 425 | Below both metrics |

**Resolution needed**: the reference scores from mujoco_playground are end-of-training values, not trajectory averages. `final_strength` (= the last eval MA) is the correct comparison metric. **Recommend switching the BENCHMARKS.md score column to `final_strength`** and auditing all existing entries.

**The CartpoleSwingup regression** is real regardless of metric: p5-ppo5's `final_strength` would be ~800+, p5-ppo6's `total_reward_ma`=641. The p5-ppo6 minibatch change (minibatch_size 4096→2048, i.e. 15→30 minibatches) hurt CartpoleSwingup convergence speed. Fix: revert `ppo_playground` minibatch_size to 4096 (15 minibatches) — OR accept it and investigate whether CartpoleSwingup needs its own spec variant.

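A toy illustration of the gap between the two metrics on a slow-rising learning curve (synthetic numbers, not benchmark data):

```python
# Toy learning curve: returns rise linearly from 0 to ~1000 over 1000 evals.
returns = [i / 999 * 1000 for i in range(1000)]

strength = sum(returns) / len(returns)  # trajectory mean over the whole run
final_ma = sum(returns[-100:]) / 100    # moving average over the last 100 evals
```

The linear ramp's trajectory mean is roughly half its end-of-training moving average, which mirrors the CartpoleBalanceSparse 545-vs-992 split above.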
---

## Next Architectural Changes

A research-based prioritized list of changes NOT yet tested, ordered by expected impact across the most envs. Wave I (5-layer value + no grad clip) is currently running — results pending.

### Priority 1: NormalTanhDistribution (tanh-squashed actions)

**Expected impact**: HIGH — affects FingerTurnEasy/Hard, FishSwim, Humanoid, CartpoleSwingup
**Implementation complexity**: MEDIUM (new distribution class + policy_util changes)
**Envs helped**: all continuous-action envs, especially precision/manipulation tasks

**What Brax does differently**: Brax uses `NormalTanhDistribution` — it samples from `Normal(loc, scale)`, then applies `tanh` to bound actions to [-1, 1]. The log-probability includes a log-det-Jacobian correction: `log_prob -= log(1 - tanh(x)^2)`. The scale is parameterized as `softplus(raw_scale) + 0.001` (state-dependent, output by the network).

**What SLM-Lab does**: a raw `Normal(loc, scale)` with a state-independent `log_std` as an `nn.Parameter`. Actions can exceed [-1, 1] and are silently clipped by the environment. The log-prob does NOT account for this clipping, creating a mismatch between the distribution the agent thinks it's using and the effective action distribution.

**Why this matters**:
1. **Gradient quality**: without the Jacobian correction, the policy gradient is biased. Actions near the boundary (common in precise manipulation like FingerTurn) have incorrect log-prob gradients, so the agent cannot learn fine boundary control.
2. **Exploration**: a state-dependent std lets the agent be precise where it's confident and exploratory where uncertain. A state-independent std forces uniform exploration across all states — wasteful for tasks requiring both coarse and fine control.
3. **FingerTurn gap (571/950 = 60%)**: FingerTurn requires precise angular positioning of a fingertip. Without tanh squashing, actions at the boundary are clipped but the log-prob doesn't reflect it — the policy "thinks" it's outputting different actions that are actually identical after clipping. This prevents learning fine-grained control near the action limits.
4. **Humanoid gap (<3%)**: 21 DOF with a high-dimensional action space. A state-independent std means all joints explore equally. Humanoid needs to stabilize the torso (low variance) while exploring leg movement (high variance) — impossible with a shared std.

**Implementation plan**:
1. Add a `NormalTanhDistribution` class in `slm_lab/lib/distribution.py`:
   - Forward: `action = tanh(Normal(loc, scale).rsample())`
   - log_prob: `Normal.log_prob(atanh(action)) - log(1 - action^2 + eps)`
   - entropy: approximate (there is no closed form for the tanh-Normal)
2. Modify `policy_util.init_action_pd()` to handle the new distribution
3. Remove `log_std_init` for playground specs — let the network output both mean and std (state-dependent)
4. Network change: the policy output dim doubles (mean + raw_scale per action dim)

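The change-of-variables correction in step 1 can be checked in isolation. A minimal scalar sketch in plain Python (not the proposed class; `eps` guards the log at the action boundary):

```python
import math


def normal_logpdf(x, loc, scale):
    """Log-density of Normal(loc, scale) at x."""
    return -0.5 * ((x - loc) / scale) ** 2 - math.log(scale * math.sqrt(2 * math.pi))


def tanh_normal_logpdf(action, loc, scale, eps=1e-6):
    """log p(a) for a = tanh(x), x ~ Normal(loc, scale).

    Change of variables: log p(a) = log N(atanh(a); loc, scale) - log(1 - a^2).
    """
    x = math.atanh(action)
    return normal_logpdf(x, loc, scale) - math.log(1.0 - action ** 2 + eps)


# Midpoint-rule integral of the corrected density over the bounded range (-1, 1).
n = 20_000
width = 2.0 / n
total = sum(
    math.exp(tanh_normal_logpdf(-1.0 + (i + 0.5) * width, 0.3, 0.7)) * width
    for i in range(n)
)
```

The integral comes out ≈ 1, confirming the corrected log-prob defines a proper density on the bounded action range; dropping the `-log(1 - a^2)` term would break this, which is exactly the bias described above.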
**Risk**: Medium. Tanh squashing changes the gradient dynamics significantly. Validate on already-solved envs (CartpoleBalance, WalkerRun) first to ensure no regression. It can be gated behind a spec flag (`action_pdtype: NormalTanh`).

---

### Fix 6: Constant LR variants + Humanoid variant (commit pending)

Added three new spec variants to `ppo_playground.yaml`:
- `ppo_playground_constlr`: DM Control + constant LR + minibatch_size=4096. For envs where vnorm=false works.
- `ppo_playground_vnorm_constlr`: DM Control + vnorm + constant LR + minibatch_size=2048. For precision envs.
- `ppo_playground_humanoid`: 2×256 policy + constant LR + vnorm. For the Humanoid DM Control envs.

---

### Priority 2: Constant LR (remove LinearToMin decay)

**Expected impact**: MEDIUM — affects all envs, especially long-training ones
**Implementation complexity**: TRIVIAL (spec-only change)
**Envs helped**: CartpoleSwingup, CartpoleSwingupSparse, FingerTurnEasy/Hard, FishSwim

**What Brax does**: Constant LR = 1e-3 for all DM Control envs. No decay.

**What SLM-Lab does**: `LinearToMin` decay from 1e-3 to 3.3e-5 (min_factor=0.033) over the full training run.

**Why this matters**: By the midpoint of training, SLM-Lab's LR is already at ~5e-4 — half the Brax LR. By 75% of training, it's at ~2.7e-4. For envs that converge late (CartpoleSwingup, FishSwim), the LR is too low during the critical learning phase. Brax maintains full learning capacity throughout.
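
The decay arithmetic can be checked with a small sketch of the schedule (assuming `LinearToMin` is a plain linear interpolation from `base_lr` down to `base_lr * min_factor`; the real scheduler lives in SLM-Lab):

```python
def linear_to_min_lr(frame, max_frame, base_lr=1e-3, min_factor=0.033):
    # linearly interpolate from base_lr at frame 0 to base_lr * min_factor at max_frame
    frac = min(frame / max_frame, 1.0)
    return base_lr * (1.0 - (1.0 - min_factor) * frac)
```

At 50% of training this gives ~5.2e-4 and at 75% ~2.7e-4, matching the numbers above.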

**This was tested as part of the Brax hyperparameter bundle (Wave E), which was ALL worse**, but that test changed 4 things simultaneously (clip_eps=0.3 + constant LR + 5-layer value + reward_scale=10). The constant LR was never tested in isolation.

**Implementation**: Set `min_factor: 1.0` in spec (or remove `lr_scheduler_spec` entirely).

**Risk**: Low. Constant LR is the Brax default and widely used. If instability occurs late in training, a gentler decay (`min_factor: 0.3`) can be used as a fallback.

---
### Priority 3: Clip epsilon 0.3 (from 0.2)

**Expected impact**: MEDIUM — affects all envs
**Implementation complexity**: TRIVIAL (spec-only change)
**Envs helped**: FingerTurnEasy/Hard, FishSwim, CartpoleSwingup (tasks needing faster policy adaptation)

**What Brax does**: `clipping_epsilon=0.3` for DM Control.

**What SLM-Lab does**: `clip_eps=0.2`.

**Why this matters**: Clip epsilon 0.2 constrains the policy ratio to [0.8, 1.2]. At 0.3, it's [0.7, 1.3] — allowing 50% larger policy updates per step. For envs that need to explore widely before converging (FingerTurn, FishSwim), the tighter constraint slows learning.
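
The ratio bound shows up directly in the clipped surrogate (a generic PPO sketch, not SLM-Lab's exact code):

```python
import numpy as np

def clipped_surrogate(ratio, adv, clip_eps):
    # PPO objective term: min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    return np.minimum(ratio * adv, np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
```

For a positive advantage and ratio 1.5, clip_eps=0.2 caps the term at 1.2·A while 0.3 allows 1.3·A, which is the 50% larger update headroom noted above.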

**This was tested in the Brax bundle (Wave E) alongside 3 other changes — all worse together.** It was never tested in isolation or with just constant LR.

**Implementation**: Change `start_val: 0.2` to `start_val: 0.3` in `clip_eps_spec`.

**Risk**: Low-medium. Larger clip_eps can cause training instability with small batches. However, with our 61K batch (2048 envs * 30 steps), it should be safe. If combined with constant LR (#2), the compounding effect should be tested carefully.

---

### Priority 4: Per-env tuning for FingerTurn (if P1-P3 insufficient)

**Expected impact**: HIGH for FingerTurn specifically
**Implementation complexity**: LOW (spec variant)
**Envs helped**: FingerTurnEasy, FingerTurnHard only

If NormalTanh + constant LR + clip_eps=0.3 don't close the FingerTurn gap (currently 60% and 51% of target), try:

1. **Lower gamma (0.995 → 0.95)**: FingerSpin uses gamma=0.95 officially. FingerTurn may benefit from shorter-horizon discounting since reward is instantaneous (current angle vs target). Lower gamma reduces value function complexity.

2. **Smaller policy network**: Brax DM Control uses `(32, 32, 32, 32)` — our `(64, 64)` may over-parameterize for manipulation tasks. Try `(32, 32, 32, 32)` to match exactly.

3. **Higher entropy coefficient**: FingerTurn has a narrow solution manifold. Increasing entropy from 0.01 to 0.02 would encourage broader exploration of finger positions.

---

### Priority 5: Humanoid-specific — num_envs=8192

**Expected impact**: HIGH for Humanoid specifically
**Implementation complexity**: TRIVIAL (spec-only)
**Envs helped**: HumanoidStand, HumanoidWalk, HumanoidRun

**Current situation**: Humanoid was incorrectly run with the loco spec (gamma=0.97, 4 epochs). The correction to the DM Control spec (gamma=0.995, 16 epochs) is being tested in Wave I (p5-ppo13). However, even with the correct spec, the standard 2048 envs may be insufficient.

**Why num_envs matters for Humanoid**: 21 DOF, 67-dim observations. With 2048 envs and time_horizon=30, the batch is 61K transitions — each containing a narrow slice of the 21-DOF state space. Humanoid needs more diverse rollouts to learn coordinated multi-joint control. Brax's effective batch of 983K transitions provides 16x more state-space coverage per update.

**Since we can't easily get 16x more data per update**, increasing num_envs from 2048 to 4096 or 8192 doubles/quadruples rollout diversity. Combined with NormalTanh (state-dependent std for per-joint exploration), this could be sufficient.

**VRAM concern**: 8192 envs may exceed the A4500's 20GB. Test with a quick 1M-frame run first. Fallback: 4096 envs.

---

### NOT recommended (already tested, no benefit)

| Change | Wave | Result | Why it failed |
|---|---|---|---|
| normalize_v_targets: false | G/G2 | Mixed (helps 5, hurts 6) | Already per-env split in spec |
| Multi-unroll (num_unrolls=16) | F | Same or worse by strength | Stale old_net (480 vs 30 steps between copies) |
| Brax hyperparameter bundle (clip_eps=0.3 + constant LR + 5-layer value + reward_scale=10) | E | All worse | Confounded — 4 changes at once. Individual effects unknown except for reward_scale (helps) |
| time_horizon=480 (single long unroll) | C | Helps CartpoleSwingup, hurts FingerTurn | 480-step GAE is noisy for precision tasks |
| 5-layer value + no grad clip | III | Only helped CartpoleSwingup slightly | Hurt AcrobotSwingup, FishSwim; not general |
| NormalTanh distribution | II | Abandoned | Architecturally incompatible — SLM-Lab stores post-tanh actions, atanh inversion unstable |
| vnorm=true rerun (reverted spec) | IV | All worse or same | No new information — variance rerun |
| 4×32 Brax policy + constant LR + vnorm | VI | All worse | FingerTurnEasy 408 (vs 571), FingerTurnHard 244 (vs 484), FishSwim 106 (vs 463) |
| Humanoid wider 2×256 + constant LR + vnorm | IV-H | No improvement | MA flat at 8-10 for all 3 Humanoid envs; NormalTanh is root cause |

### Currently testing

### Wave V-B completed results (constant LR)

| Env | strength | final_strength | Old best | Verdict |
|---|---|---|---|---|
| PointMass | 841.3 | 877.3 | 863.5 | ❌ strength lower |
| **SwimmerSwimmer6** | **517.3** | 585.7 | 509.3 | ✅ NEW BEST (+1.6%) |
| FishSwim | 434.6 | 550.8 | 463.0 | ❌ strength lower (final much better) |

### Wave VII completed results (clip_eps=0.3 + constant LR)

| Env | strength | final_strength | Old best | Verdict |
|---|---|---|---|---|
| FingerTurnEasy | 518.0 | 608.8 | 570.9 | ❌ strength lower (final much better, but slow start drags average) |
| FingerTurnHard | 401.7 | 489.7 | 484.1 | ❌ strength lower (same pattern) |
| **FishSwim** | **476.9** | 581.4 | 463.0 | ✅ NEW BEST (+3%) |

**Key insight**: clip_eps=0.3 produces higher final performance but worse trajectory-averaged strength. The wider clip allows bigger policy updates, which increases exploration early (slower convergence) but reaches higher asymptotic performance. The strength metric penalizes late bloomers.

### Wave V completed results

| Env | strength | final_strength | Old best | Verdict |
|---|---|---|---|---|
| CartpoleSwingup | **606.5** | 702.6 | 576.1 | ✅ NEW BEST (+5%) |
| CartpoleSwingupSparse | **383.7** | 536.2 | 296.3 | ✅ NEW BEST (+29%) |
| CartpoleBalanceSparse | **757.9** | 993.0 | 690.4 | ✅ NEW BEST (+10%) |
| AcrobotSwingup | 161.2 | 246.9 | 172.8 | ❌ strength lower (final_strength much better but trajectory avg worse due to slow start) |

**Key insight**: Constant LR is the single most impactful change found. LR decay from 1e-3 to 3.3e-5 was hurting late-converging envs. CartpoleBalanceSparse went from 690→993 (final_strength), effectively solved.

### Completed waves

**Wave VI** (p5-ppo18): 4×32 Brax policy — **STOPPED, all underperformed**. FingerTurnEasy MA 408, FingerTurnHard MA 244, FishSwim MA 106. All below old bests.

**Wave IV-H** (p5-ppo16h): Humanoid wider 2×256 + constant LR + vnorm — all flat at MA 8-10.

### Next steps after Wave VII

1. **Humanoid num_envs=4096/8192** — only major gap remaining after Wave VII
2. **Consider constant LR + clip_eps=0.3 as the new general default** if results hold across envs

### Key Brax architecture differences (from source code analysis)

| Parameter | Brax Default | SLM-Lab | Impact |
|---|---|---|---|
| Policy | 4×32 (deeper, narrower) | 2×64 | **Testable via spec** |
| Value | 5×256 | 3×256 | Tested Wave III — no help |
| Distribution | tanh_normal | Normal | **Cannot test** (architectural incompatibility) |
| Init | lecun_uniform | orthogonal_ | Would need code change |
| State-dep std | False (scalar) | False (nn.Parameter) | Similar |
| Activation | swish (SiLU) | SiLU | ✅ Match |
| clipping_epsilon | 0.3 | 0.2 | **Testable via spec** |
| num_minibatches | 32 | 15-30 | Close enough |
| num_unrolls | 16 (implicit) | 1 | Tested Wave F — stale old_net hurts |
|
docs/phase5_brax_comparison.md
ADDED
|
@@ -0,0 +1,446 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Phase 5: Brax PPO vs SLM-Lab PPO — Comprehensive Comparison

Source: `google/brax` (latest `main`) and `google-deepmind/mujoco_playground` (latest `main`).
All values extracted from actual code, not documentation.

---

## 1. Batch Collection Mechanics

### Brax
The training loop in `brax/training/agents/ppo/train.py` (lines 586–591) collects data via nested `jax.lax.scan`:

```python
(state, _), data = jax.lax.scan(
    f, (state, key_generate_unroll), (),
    length=batch_size * num_minibatches // num_envs,
)
```

Each inner call does `generate_unroll(env, state, policy, key, unroll_length)` — a `jax.lax.scan` of `unroll_length` sequential env steps. The outer scan repeats this `batch_size * num_minibatches // num_envs` times **sequentially**, rolling the env state forward continuously.

**DM Control default**: `num_envs=2048`, `batch_size=1024`, `num_minibatches=32`, `unroll_length=30`.
- Outer scan length = `1024 * 32 / 2048 = 16` sequential unrolls.
- Each unroll = 30 steps.
- Total data per training step = 16 * 2048 * 30 = **983,040 transitions**, reshaped to `(32768, 30)`.
- Then `num_updates_per_batch=16` SGD passes, each splitting into 32 minibatches.
- **Effective gradient steps per collect**: 16 * 32 = 512.

### SLM-Lab
`time_horizon=30`, `num_envs=2048` → collects `30 * 2048 = 61,440` transitions.
`training_epoch=16`, `minibatch_size=4096` → 15 minibatches per epoch → 16 * 15 = 240 gradient steps.
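
The two setups reduce to a short arithmetic check (all numbers taken from the defaults quoted above):

```python
# Brax DM Control defaults quoted above
num_envs, batch_size, num_minibatches, unroll_length = 2048, 1024, 32, 30
num_updates_per_batch = 16

outer_scan_len = batch_size * num_minibatches // num_envs      # sequential unrolls
brax_transitions = outer_scan_len * num_envs * unroll_length   # data per training step
brax_grad_steps = num_updates_per_batch * num_minibatches      # SGD steps per collect

# SLM-Lab spec values quoted above
time_horizon, training_epoch, minibatch_size = 30, 16, 4096
slm_transitions = time_horizon * num_envs
slm_grad_steps = training_epoch * (slm_transitions // minibatch_size)
```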

### Difference
**Brax collects 16x more data per training step** by doing 16 sequential unrolls before updating; SLM-Lab does 1 unroll. This means Brax's advantages are computed over much longer trajectories (480 steps vs 30), providing much better value bootstrap targets.

Brax also shuffles the entire 983K-transition dataset into minibatches, enabling better gradient estimates.

**Classification: CRITICAL**

**Fix**: Increase `time_horizon` or implement multi-unroll collection. The simplest fix: increase `time_horizon` from 30 to 480 (= 30 * 16). This gives the same data-per-update ratio but requires more memory. Alternative: keep `time_horizon=30` but change `training_epoch` to 1 and let the loop collect multiple horizons before training — this requires architectural changes.

**Simplest spec-only fix**: Set `time_horizon=480` (or even 256 as a compromise). This is safe because GAE with `lam=0.95` naturally discounts old data. Risk: memory usage for the batch buffer increases 16x.

---
## 2. Reward Scaling

### Brax
`reward_scaling` is applied **inside the loss function** (`losses.py` line 212):
```python
rewards = data.reward * reward_scaling
```
This scales rewards just before GAE computation. It does NOT modify the environment rewards.

**DM Control default**: `reward_scaling=10.0`
**Locomotion default**: `reward_scaling=1.0`
**Manipulation default**: `reward_scaling=1.0` (except PandaPickCubeCartesian: 0.1)

### SLM-Lab
`reward_scale` is applied in the **environment wrapper** (`playground.py` line 149):
```python
rewards = np.asarray(self._state.reward) * self._reward_scale
```

**Current spec**: `reward_scale: 10.0` (DM Control)

### Difference
Functionally equivalent — both multiply rewards by a constant before GAE. The location (env vs loss) shouldn't matter for PPO since rewards are only used in GAE computation.

**Classification: MINOR** — Already matching for DM Control.

---
## 3. Observation Normalization

### Brax
Uses Welford's online algorithm to track per-feature running mean/std. Applied via `running_statistics.normalize()`:
```python
data = (data - mean) / std
```
Mean-centered AND divided by std. Updated **every training step** before SGD (line 614).
`normalize_observations=True` for all environments.
`std_eps=0.0` (default, no epsilon in std).
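
A minimal sketch of the shared idea — Welford's algorithm for running per-feature statistics (neither library's actual class; names are illustrative):

```python
import numpy as np

class RunningNorm:
    """Per-feature running mean/std via Welford's online algorithm (sketch)."""

    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)   # running sum of squared deviations
        self.count = 0

    def update(self, batch):
        # batch: (n, dim); fold each observation into the running statistics
        for x in batch:
            self.count += 1
            delta = x - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (x - self.mean)

    def normalize(self, x, eps=1e-8):
        std = np.sqrt(self.m2 / max(self.count, 1))
        return (x - self.mean) / (std + eps)
```

The `eps` argument here plays the role of gymnasium's `epsilon=1e-8`; Brax's `std_eps=0.0` corresponds to dropping it.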

### SLM-Lab
Uses gymnasium's `VectorNormalizeObservation` (CPU) or `TorchNormalizeObservation` (GPU), which also uses Welford's algorithm with mean-centering and std division.

**Current spec**: `normalize_obs: true`

### Difference
Both use mean-centered running normalization. Brax updates normalizer params inside the training loop (not during rollout), while SLM-Lab updates during rollout (gymnasium wrapper). This is a subtle timing difference but functionally equivalent.

Brax uses `std_eps=0.0` by default, while gymnasium uses `epsilon=1e-8`. Minor numerical difference.

**Classification: MINOR** — Already matching.

---
## 4. Value Function

### Brax
- **Loss**: Unclipped MSE by default (`losses.py` lines 252–263):
```python
v_error = vs - baseline
v_loss = jnp.mean(v_error * v_error) * 0.5 * vf_coefficient
```
- **vf_coefficient**: 0.5 (default in `train.py`)
- **Value clipping**: Only if `clipping_epsilon_value` is set (default `None` = no clipping)
- **No value target normalization** — raw GAE targets
- **Separate policy and value networks** (always separate in Brax's architecture)
- Value network: 5 hidden layers of 256 (DM Control default) with `swish` activation
- **Bootstrap on timeout**: Optional, default `False`

### SLM-Lab
- **Loss**: MSE with `val_loss_coef=0.5`
- **Value clipping**: Optional via `clip_vloss` (default False)
- **Value target normalization**: Optional via `normalize_v_targets: true` using `ReturnNormalizer`
- **Architecture**: `[256, 256, 256]` with SiLU (3 layers vs Brax's 5)

### Difference
1. **Value network depth**: Brax uses **5 layers of 256** for DM Control, SLM-Lab uses **3 layers of 256**. This is a meaningful capacity difference for the value function, which needs to accurately estimate returns.

2. **Value target normalization**: SLM-Lab has `normalize_v_targets: true` with a `ReturnNormalizer`. Brax does NOT normalize value targets. This could cause issues if the normalizer is poorly calibrated.

3. **Value network architecture (Loco)**: Brax uses `[256, 256, 256, 256, 256]` for locomotion too.

**Classification: IMPORTANT**

**Fix**:
- Consider increasing the value network to 5 layers (`[256, 256, 256, 256, 256]`) to match Brax.
- Consider disabling `normalize_v_targets`, since Brax doesn't use it and `reward_scaling=10.0` already provides good gradient magnitudes.
- Risk of regression: the return normalizer may be helping envs with high reward variance. Test with and without.

---
## 5. Advantage Computation (GAE)

### Brax
`compute_gae` in `losses.py` (lines 38–100):
- Standard GAE with `lambda_=0.95`, `discount=0.995` (DM Control)
- Computed over each unroll of `unroll_length` timesteps
- Uses a `truncation` mask to handle episode boundaries within an unroll
- `normalize_advantage=True` (default): `advs = (advs - mean) / (std + 1e-8)` over the **entire batch**
- GAE is computed **inside the loss function**, once per SGD pass — not recomputed with current value estimates, but evaluated from the rollout data, including the stored baseline values

### SLM-Lab
- GAE computed in `calc_gae_advs_v_targets` using `math_util.calc_gaes`
- Computed once before training epochs
- Advantage normalization: per-minibatch standardization in `calc_policy_loss`:
```python
advs = math_util.standardize(advs)  # per minibatch
```

### Difference
1. **GAE horizon**: Brax computes GAE over 30-step unrolls. SLM-Lab also uses a 30-step horizon. **Match**.
2. **Advantage normalization scope**: Brax normalizes over the **entire batch** (983K transitions). SLM-Lab normalizes **per minibatch** (4096 transitions). Per-minibatch normalization has more variance. However, both approaches are standard — SB3 also normalizes per minibatch.
3. **Truncation handling**: Brax explicitly handles truncation with `truncation_mask` in GAE. SLM-Lab uses `terminateds` from the env wrapper, with truncation handled by gymnasium's auto-reset. These should be functionally equivalent.
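
A simplified backward-recursion sketch of GAE with separate termination and truncation handling (NumPy; this mirrors the idea described above, not Brax's exact `compute_gae`):

```python
import numpy as np

def compute_gae(rewards, values, bootstrap_value, terminated, truncated,
                discount=0.995, lam=0.95):
    """Backward GAE recursion. `terminated` zeroes the value bootstrap;
    `truncated` stops advantage credit flowing across an episode boundary
    within the unroll (sketch of the truncation-mask idea)."""
    T = len(rewards)
    values_t1 = np.append(values[1:], bootstrap_value)  # V(s_{t+1}) per step
    not_term = 1.0 - terminated
    not_trunc = 1.0 - truncated
    advs = np.zeros(T)
    last_adv = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + discount * not_term[t] * values_t1[t] - values[t]
        last_adv = delta + discount * lam * not_term[t] * not_trunc[t] * last_adv
        advs[t] = last_adv
    return advs, advs + values  # advantages and value targets
```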

**Classification: MINOR** — Approaches differ, but both are standard.

---

## 6. Learning Rate Schedule

### Brax
Default: `learning_rate_schedule=None` → **no schedule** (constant LR).
Optional: `ADAPTIVE_KL` schedule that adjusts LR based on KL divergence.
Base LR: `1e-3` (DM Control), `3e-4` (Locomotion).

### SLM-Lab
Uses the `LinearToMin` scheduler:
```yaml
lr_scheduler_spec:
  name: LinearToMin
  frame: "${max_frame}"
  min_factor: 0.033
```
This linearly decays the LR from `1e-3` to `1e-3 * 0.033 = 3.3e-5` over training.

### Difference
**Brax uses a constant LR. SLM-Lab decays the LR by 30x over training.** This is a significant difference. Linear LR decay can help convergence in the final phase but can also hurt by reducing the LR too early in long training runs.

**Classification: IMPORTANT**

**Fix**: Consider removing or weakening the LR decay for playground envs:
- Option A: Set `min_factor: 1.0` (effectively constant LR) to match Brax
- Option B: Use a much gentler decay, e.g. `min_factor: 0.1` (10x instead of 30x)
- Risk: Some envs may benefit from the decay. Test both.

---

## 7. Entropy Coefficient

### Brax
**Fixed** (no decay):
- DM Control: `entropy_cost=1e-2`
- Locomotion: `entropy_cost=1e-2` (some overrides to `5e-3`)
- Manipulation: varies, typically `1e-2` or `2e-2`

### SLM-Lab
**Fixed** (`no_decay`):
```yaml
entropy_coef_spec:
  name: no_decay
  start_val: 0.01
```

### Difference
**Match**: Both use a fixed `0.01`.

**Classification: MINOR** — Already matching.

---

## 8. Gradient Clipping

### Brax
`max_grad_norm` via `optax.clip_by_global_norm()`:
- DM Control default: **None** (no clipping!)
- Locomotion default: `1.0`
- Vision PPO and some manipulation: `1.0`

### SLM-Lab
`clip_grad_val: 1.0` — always clips gradients by global norm.

### Difference
**Brax does NOT clip gradients for DM Control by default.** SLM-Lab always clips at 1.0.

Gradient clipping can be overly conservative, preventing the optimizer from taking large useful steps when gradients are naturally large (e.g., early in training with `reward_scaling=10.0`).

**Classification: IMPORTANT** — Could explain slow convergence on DM Control envs.

**Fix**: Remove gradient clipping for the DM Control playground spec:
```yaml
clip_grad_val: null  # match Brax DM Control default
```
Keep `clip_grad_val: 1.0` for the locomotion spec. Risk: gradient explosions without clipping, but Brax demonstrates it works for DM Control.

---

## 9. Action Distribution

### Brax
Default: `NormalTanhDistribution` — samples from `Normal(loc, scale)` then applies `tanh` postprocessing.
- `param_size = 2 * action_size` (network outputs both mean and log_scale)
- Scale: `scale = (softplus(raw_scale) + 0.001) * 1.0` (min_std=0.001, var_scale=1)
- **State-dependent std**: The scale is output by the policy network (not a separate parameter)
- Uses a `tanh` bijector with log-det-jacobian correction

### SLM-Lab
Default: `Normal(loc, scale)` without tanh.
- `log_std_init` creates a **state-independent** `nn.Parameter` for log_std
- Scale: `scale = clamp(log_std, -5, 0.5).exp()` → std range [0.0067, 1.648]
- **State-independent std** (when `log_std_init` is set)

### Difference
1. **Tanh squashing**: Brax applies `tanh` to bound actions to [-1, 1]. SLM-Lab does NOT. This is a fundamental architectural difference:
   - With tanh: actions are bounded, and the log-prob includes a jacobian correction
   - Without tanh: actions can exceed env bounds, relying on env clipping

2. **State-dependent vs independent std**: Brax uses a state-dependent std (the network outputs it); SLM-Lab uses a state-independent learnable parameter.

3. **Std parameterization**: Brax uses `softplus + 0.001` (min_std=0.001); SLM-Lab uses `clamp(log_std, -5, 0.5).exp()` with a max std of 1.648.

4. **Max std cap**: SLM-Lab caps at exp(0.5)=1.648. Brax has no explicit cap (softplus can grow unbounded). However, Brax's `tanh` squashing means even a large std doesn't produce out-of-range actions.
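
The two std parameterizations side by side (a numeric sketch; `min_std` and the clamp bounds are the values quoted above):

```python
import numpy as np

def brax_std(raw_scale, min_std=0.001):
    # softplus(raw_scale) + min_std: bounded below by min_std, unbounded above
    return np.logaddexp(0.0, raw_scale) + min_std  # stable softplus

def slm_std(log_std, lo=-5.0, hi=0.5):
    # exp(clamp(log_std)): bounded to [exp(-5) ~ 0.0067, exp(0.5) ~ 1.648]
    return np.exp(np.clip(log_std, lo, hi))
```

Feeding a large raw value into each shows the asymmetry: the Brax form keeps growing, while the SLM-Lab form saturates at 1.648.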

**Classification: IMPORTANT**

**Note**: For MuJoCo Playground, where actions are already in [-1, 1] and the env wrapper `PlaygroundVecEnv` has action space `Box(-1, 1)`, the `tanh` squashing may not be critical since the env naturally clips. But the log-prob correction matters for policy gradient quality.

**Fix**:
- The state-independent log_std is a reasonable simplification (CleanRL also uses it). Keep.
- The `max=0.5` clamp may be too restrictive. Consider increasing to `max=2.0` (CleanRL default) or removing the upper clamp entirely.
- Consider implementing tanh squashing as an option for playground envs.

---

## 10. Network Initialization

### Brax
Default: `lecun_uniform` for all layers (policy and value).
Activation: `swish` (= SiLU).
No special output layer initialization by default.

### SLM-Lab
Default: `orthogonal_` initialization.
Activation: SiLU (same as swish).

### Difference
- Brax uses `lecun_uniform`, SLM-Lab uses `orthogonal_`. Both are reasonable for swish/SiLU activations.
- `orthogonal_` tends to preserve gradient magnitudes across layers, which can be beneficial for deeper networks.

**Classification: MINOR** — Both are standard choices. `orthogonal_` may actually be slightly better for the 3-layer SLM-Lab network.

---
## 11. Network Architecture

### Brax (DM Control defaults)
- **Policy**: `(32, 32, 32, 32)` — 4 layers of 32, swish activation
- **Value**: `(256, 256, 256, 256, 256)` — 5 layers of 256, swish activation

### Brax (Locomotion defaults)
- **Policy**: `(128, 128, 128, 128)` — 4 layers of 128
- **Value**: `(256, 256, 256, 256, 256)` — 5 layers of 256

### SLM-Lab (ppo_playground)
- **Policy**: `(64, 64)` — 2 layers of 64, SiLU
- **Value**: `(256, 256, 256)` — 3 layers of 256, SiLU

### Difference
1. **Policy width**: SLM-Lab uses wider layers (64) but fewer of them (2 vs 4). Total hidden-layer params are roughly similar for DM Control (4*32*32 = 4096 vs 2*64*64 = 8192) — SLM-Lab's policy is larger per layer but shallower.

2. **Value depth**: 3 vs 5 layers. This is significant — the value function benefits from more depth to accurately represent complex return landscapes, especially for long-horizon tasks.

3. **DM Control policy**: Brax uses very small 32-wide networks. SLM-Lab's 64-wide may be slightly over-parameterized but shouldn't hurt.

**Classification: IMPORTANT** (mainly the value network depth)

**Fix**: Consider increasing the value network to 5 layers to match Brax:
```yaml
_value_body: &value_body
  modules:
    body:
      Sequential:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
```

---
## 12. Clipping Epsilon

### Brax
Default: `clipping_epsilon=0.3` (in `train.py` line 206).
DM Control: not overridden → **0.3**.
Locomotion: some envs override to `0.2`.

### SLM-Lab
Default: `clip_eps=0.2` (in spec).

### Difference
Brax uses **0.3** while SLM-Lab uses **0.2**. This is notable — 0.3 allows larger policy updates per step, which can accelerate learning but risks instability. Given that Brax collects 16x more data per update (see #1), the larger clip epsilon is safe because the policy ratio variance is lower with more data.

**Classification: IMPORTANT** — especially in combination with the batch size difference (#1).

**Fix**: Consider increasing to 0.3 for the DM Control playground spec. However, this should only be done together with the batch size fix (#1), since a larger clip epsilon with small batches risks instability.

---

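To make the effect of the two epsilon values concrete, here is a minimal plain-Python sketch of PPO's clipped surrogate for a single sample (an illustration, not SLM-Lab's or Brax's actual loss code):

```python
def ppo_clip_objective(ratio, advantage, eps):
    """PPO clipped surrogate for one sample: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped_ratio = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped_ratio * advantage)

# A policy ratio of 1.25 is clipped at eps=0.2 but fully credited at eps=0.3
print(ppo_clip_objective(1.25, 1.0, 0.2))  # 1.2
print(ppo_clip_objective(1.25, 1.0, 0.3))  # 1.25
```

With eps=0.3 the optimizer keeps credit for updates up to a 30% probability-ratio change, which is why it pairs naturally with the lower-variance gradients from Brax's larger batches.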
## 13. Discount Factor

### Brax (DM Control)
Default: `discounting=0.995`
Overrides: BallInCup=0.95, FingerSpin=0.95

### Brax (Locomotion)
Default: `discounting=0.97`
Overrides: Go1Backflip=0.95

### SLM-Lab
DM Control: `gamma=0.995`
Locomotion: `gamma=0.97`
Overrides: FingerSpin=0.95

### Difference
**Match** for the main categories.

**Classification: MINOR** — already matching.

---

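For intuition, a discount factor gamma corresponds to an effective credit-assignment horizon of roughly 1/(1-gamma) steps, which is why locomotion (0.97) uses a much shorter window than DM Control (0.995). A quick check of the values above:

```python
def effective_horizon(gamma):
    """Approximate number of steps over which rewards meaningfully contribute."""
    return 1.0 / (1.0 - gamma)

for gamma in (0.995, 0.97, 0.95):
    print(gamma, round(effective_horizon(gamma), 1))  # ~200, ~33, ~20 steps
```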
## Summary: Priority-Ordered Fixes

### CRITICAL

| # | Issue | Brax Value | SLM-Lab Value | Fix |
|---|-------|-----------|--------------|-----|
| 1 | **Batch size (data per training step)** | 983K transitions (16 unrolls of 30) | 61K transitions (1 unroll of 30) | Increase `time_horizon` to 480, or implement multi-unroll collection |

### IMPORTANT

| # | Issue | Brax Value | SLM-Lab Value | Fix |
|---|-------|-----------|--------------|-----|
| 4 | **Value network depth** | 5 layers of 256 | 3 layers of 256 | Add 2 more hidden layers |
| 6 | **LR schedule** | Constant | Linear decay to 0.033x | Set `min_factor: 1.0` or weaken to 0.1 |
| 8 | **Gradient clipping (DM Control)** | None | 1.0 | Set `clip_grad_val: null` for DM Control |
| 9 | **Action std upper bound** | Softplus (unbounded) | exp(0.5)=1.65 | Increase max clamp from 0.5 to 2.0 |
| 11 | **Clipping epsilon** | 0.3 | 0.2 | Increase to 0.3 (only with larger batch) |

### MINOR (already matching or small effect)

| # | Issue | Status |
|---|-------|--------|
| 2 | Reward scaling | Match (10.0 for DM Control) |
| 3 | Obs normalization | Match (Welford running stats) |
| 5 | GAE computation | Match (lam=0.95, per-minibatch normalization) |
| 7 | Entropy coefficient | Match (0.01, fixed) |
| 10 | Network init | Minor difference (orthogonal vs lecun_uniform) |
| 13 | Discount factor | Match |

---

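The obs-normalization row in the MINOR table refers to Welford's online algorithm for running mean/variance. A minimal scalar sketch of that update rule (both implementations use a vectorized variant; this is only the core recurrence):

```python
class RunningNorm:
    """Welford online mean/variance, used to normalize observations on the fly."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def var(self):
        return self.m2 / self.n if self.n else 1.0

    def normalize(self, x):
        return (x - self.mean) / (self.var ** 0.5 + 1e-8)
```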
## Recommended Implementation Order

### Phase 1: Low-risk spec changes (test on CartpoleBalance/Swingup first)
1. Remove gradient clipping for DM Control: `clip_grad_val: null`
2. Weaken LR decay: `min_factor: 0.1` (or `1.0` for constant)
3. Increase the log_std clamp from 0.5 to 2.0

### Phase 2: Architecture changes (test on several envs)
4. Increase the value network to 5 layers of 256
5. Consider disabling `normalize_v_targets` since Brax doesn't use it

### Phase 3: Batch size alignment (largest expected impact, highest risk)
6. Increase `time_horizon` to 240 or 480 to match Brax's effective batch size
7. If the time_horizon increase works, consider increasing `clipping_epsilon` to 0.3

### Risk Assessment
- **Safest changes**: #1 (no grad clip), #2 (weaker LR decay), #3 (wider std range)
- **Medium risk**: #4 (deeper value net — more compute, could slow training), #5 (removing normalization)
- **Highest risk/reward**: #6 (larger time_horizon — 16x more memory, biggest expected improvement)

### Envs Already Solved
Changes should be tested against already-solved envs (CartpoleBalance, CartpoleSwingup, etc.) to ensure no regression. The safest approach is to implement spec variants rather than modifying the default spec.

---

## Key Insight

The single largest difference is **data collection volume per training step**. Brax collects 16x more transitions before each update cycle. This provides:
1. Better advantage estimates (longer trajectory context)
2. More diverse minibatches (less overfitting per update)
3. Safety for a larger clip epsilon and no gradient clipping

Without matching this, the other improvements will have diminished returns. The multi-unroll collection in Brax is fundamentally tied to its JAX/vectorized architecture — SLM-Lab's sequential PyTorch loop can approximate this by simply increasing `time_horizon`, at the cost of memory.

A practical compromise: increase `time_horizon` from 30 to 128 or 256 (4-8x, not the full 16x) and adjust other hyperparameters accordingly.
docs/phase5_spec_research.md
ADDED
# Phase 5 Spec Research: Official vs SLM-Lab Config Comparison

## Source Files

- **Official config**: `mujoco_playground/config/dm_control_suite_params.py` ([GitHub](https://github.com/google-deepmind/mujoco_playground/blob/main/mujoco_playground/config/dm_control_suite_params.py))
- **Official network**: Brax PPO defaults (`brax/training/agents/ppo/networks.py`)
- **Our spec**: `slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml`
- **Our wrapper**: `slm_lab/env/playground.py`

## Critical Architectural Difference: Batch Collection Size

The most significant difference is how much data is collected per update cycle.

### Official Brax PPO batch mechanics

In Brax PPO, `batch_size` means **minibatch size in trajectories** (not total batch):

| Parameter | Official Value |
|---|---|
| `num_envs` | 2048 |
| `unroll_length` | 30 |
| `batch_size` | 1024 (trajectories per minibatch) |
| `num_minibatches` | 32 |
| `num_updates_per_batch` | 16 (epochs) |

- Sequential unrolls per env = `batch_size * num_minibatches / num_envs` = 1024 * 32 / 2048 = **16**
- Total transitions collected = 2048 envs * 16 unrolls * 30 steps = **983,040**
- Each minibatch = 30,720 transitions
- Grad steps per update = 32 * 16 = **512**

### SLM-Lab batch mechanics

| Parameter | Our Value |
|---|---|
| `num_envs` | 2048 |
| `time_horizon` | 30 |
| `minibatch_size` | 2048 |
| `training_epoch` | 16 |

- Total transitions collected = 2048 * 30 = **61,440**
- Num minibatches = 61,440 / 2048 = **30**
- Each minibatch = 2,048 transitions
- Grad steps per update = 30 * 16 = **480**

### Comparison

| Metric | Official | SLM-Lab | Ratio |
|---|---|---|---|
| Transitions per update | 983,040 | 61,440 | **16x more in official** |
| Minibatch size (transitions) | 30,720 | 2,048 | **15x more in official** |
| Grad steps per update | 512 | 480 | ~same |
| Data reuse (epochs over same data) | 16 | 16 | same |

**Impact**: Official collects 16x more data before each gradient update cycle. Each minibatch is 15x larger. Grad-step counts are similar, but each gradient step in official sees 15x more transitions — better gradient estimates, less variance.

This is likely the **root cause** of most failures, especially hard exploration tasks (FingerTurn, CartpoleSwingupSparse).

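The batch arithmetic above can be rechecked with a short script; parameter names mirror the Brax and SLM-Lab hyperparameters quoted in the tables:

```python
def brax_batch_mechanics(num_envs, unroll_length, batch_size,
                         num_minibatches, num_updates_per_batch):
    """Derive (total transitions, minibatch transitions, grad steps) per update."""
    unrolls_per_env = batch_size * num_minibatches // num_envs
    total = num_envs * unrolls_per_env * unroll_length
    minibatch_transitions = batch_size * unroll_length
    grad_steps = num_minibatches * num_updates_per_batch
    return total, minibatch_transitions, grad_steps

def slm_batch_mechanics(num_envs, time_horizon, minibatch_size, training_epoch):
    total = num_envs * time_horizon
    num_minibatches = total // minibatch_size
    grad_steps = num_minibatches * training_epoch
    return total, minibatch_size, grad_steps

print(brax_batch_mechanics(2048, 30, 1024, 32, 16))  # (983040, 30720, 512)
print(slm_batch_mechanics(2048, 30, 2048, 16))       # (61440, 2048, 480)
```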
## Additional Missing Feature: reward_scaling=10.0

The official config uses `reward_scaling=10.0`. SLM-Lab has **no reward scaling** (implicitly 1.0). This amplifies the reward signal by 10x, which:
- Helps with sparse/small rewards (CartpoleSwingupSparse, AcrobotSwingup)
- Works in conjunction with value target normalization
- May partially compensate for the batch size difference

## Network Architecture

| Component | Official (Brax) | SLM-Lab | Match? |
|---|---|---|---|
| Policy layers | (32, 32, 32, 32) | (64, 64) | Different shape, similar param count |
| Value layers | (256, 256, 256, 256, 256) | (256, 256, 256) | Official deeper |
| Activation | Swish (SiLU) | SiLU | Same |
| Init | default (lecun_uniform) | orthogonal_ | Different |

The policy parameter counts are the same order of magnitude (a 4x32 chain vs a 2x64 chain). The value network is 2 layers shallower in SLM-Lab. Unlikely to be the primary cause of failures, but it could matter for harder tasks.

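As a sanity check on the "similar param count" claim, a small helper can count fully-connected parameters for both policy shapes. The obs/action dimensions here are hypothetical placeholders, not taken from any specific env:

```python
def mlp_param_count(sizes):
    """Weights + biases for a fully-connected chain of layer sizes."""
    return sum((fan_in + 1) * fan_out for fan_in, fan_out in zip(sizes, sizes[1:]))

obs_dim, act_out = 24, 12  # hypothetical dims, for illustration only
brax_policy = mlp_param_count([obs_dim, 32, 32, 32, 32, act_out])
slm_policy = mlp_param_count([obs_dim, 64, 64, act_out])
print(brax_policy, slm_policy)  # same order of magnitude, SLM-Lab ~1.5x larger
```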
## Per-Environment Analysis

### Env: FingerTurnEasy (570 vs 950 target)

| Parameter | Official | Ours | Mismatch? |
|---|---|---|---|
| gamma (discounting) | 0.995 | 0.995 | Match |
| training_epoch (num_updates_per_batch) | 16 | 16 | Match |
| time_horizon (unroll_length) | 30 | 30 | Match |
| action_repeat | 1 | 1 | Match |
| num_envs | 2048 | 2048 | Match |
| reward_scaling | 10.0 | 1.0 (none) | **MISMATCH** |
| batch collection size | 983K | 61K | **MISMATCH (16x)** |
| minibatch transitions | 30,720 | 2,048 | **MISMATCH (15x)** |

**Per-env overrides**: None in official. Uses all defaults.
**Diagnosis**: Huge gap (570 vs 950). FingerTurn is a precision manipulation task requiring coordinated finger-tip control. The 16x smaller batch likely causes high gradient variance, preventing the policy from learning fine-grained coordination. reward_scaling=10 would also help.

### Env: FingerTurnHard (~500 vs 950 target)

Same as FingerTurnEasy — no per-env overrides, so the same mismatches apply.
**Diagnosis**: Even harder version, same root cause. Needs larger batches and reward scaling.

### Env: CartpoleSwingup (443 vs 800 target, regression from p5-ppo5=803)

| Parameter | Official | p5-ppo5 | p5-ppo6 (current) |
|---|---|---|---|
| minibatch_size | N/A (30,720 transitions) | 4096 | 2048 |
| num_minibatches | 32 | 15 | 30 |
| grad steps/update | 512 | 240 | 480 |
| total transitions/update | 983K | 61K | 61K |
| reward_scaling | 10.0 | 1.0 | 1.0 |

**Per-env overrides**: None in official.
**Diagnosis**: The p5-ppo5→p5-ppo6 regression (803→443) came from doubling grad steps (240→480) while halving minibatch size (4096→2048). More gradient steps on smaller minibatches means more overfitting per update. p5-ppo5's 15 larger minibatches were better for CartpoleSwingup.

**Answer to key question**: Yes, reverting to minibatch_size=4096 would likely restore CartpoleSwingup performance. However, the deeper fix is the batch collection size — both p5-ppo5 and p5-ppo6 collect only 61K transitions vs the official 983K.

### Env: CartpoleSwingupSparse (270 vs 425 target)

| Parameter | Official | Ours | Mismatch? |
|---|---|---|---|
| All params | Same defaults | Same as ppo_playground | Same mismatches |
| reward_scaling | 10.0 | 1.0 | **MISMATCH — critical for sparse** |

**Per-env overrides**: None in official.
**Diagnosis**: Sparse reward plus no reward scaling gives a very weak learning signal. reward_scaling=10 is especially important here. The small batch also hurts exploration diversity.

### Env: CartpoleBalanceSparse (545 vs 700 target)

Same mismatches as the other Cartpole variants. No per-env overrides.
**Diagnosis**: Note that the actual final MA is 992 (well above target). The low "strength" score (545) reflects slow initial convergence, not inability to solve. If the metric switches to final_strength, this may already pass. reward_scaling would accelerate early convergence.

### Env: AcrobotSwingup (172 vs 220 target)

| Parameter | Official | Ours | Mismatch? |
|---|---|---|---|
| num_timesteps | 100M | 100M | Match (official has explicit override) |
| All training params | Defaults | ppo_playground | Same mismatches |
| reward_scaling | 10.0 | 1.0 | **MISMATCH** |

**Per-env overrides**: Official only sets `num_timesteps=100M` (already matched).
**Diagnosis**: Close to target (172 vs 220). reward_scaling=10 would likely close the gap. The final MA (253) exceeds the target — the metric issue compounds this.

### Env: SwimmerSwimmer6 (485 vs 560 target)

| Parameter | Official | Ours | Mismatch? |
|---|---|---|---|
| num_timesteps | 100M | 100M | Match (official has explicit override) |
| All training params | Defaults | ppo_playground | Same mismatches |
| reward_scaling | 10.0 | 1.0 | **MISMATCH** |

**Per-env overrides**: Official only sets `num_timesteps=100M` (already matched).
**Diagnosis**: Swimmer is a multi-joint locomotion task that benefits from larger batches (more diverse body configurations per update). reward_scaling would also help.

### Env: PointMass (863 vs 900 target)

No per-env overrides. Same mismatches.
**Diagnosis**: Very close (863 vs 900). This might pass with reward_scaling alone. Simple task — batch size less critical.

### Env: FishSwim (~530 vs 650 target, may still be running)

No per-env overrides. Same mismatches.
**Diagnosis**: 3D swimming task. Would benefit from both larger batches and reward_scaling.

## Summary of Mismatches (All Envs)

| Mismatch | Official | SLM-Lab | Impact | Fixable? |
|---|---|---|---|---|
| **Batch collection size** | 983K transitions | 61K transitions | HIGH — 16x less data per update | Requires architectural change to collect multiple unrolls |
| **Minibatch size** | 30,720 transitions | 2,048 transitions | HIGH — much noisier gradients | Limited by venv_pack constraint |
| **reward_scaling** | 10.0 | 1.0 (none) | MEDIUM-HIGH — especially for sparse envs | Easy to add |
| **Value network depth** | 5 layers | 3 layers | LOW-MEDIUM | Easy to change in spec |
| **Weight init** | lecun_uniform | orthogonal_ | LOW | Unlikely to matter much |

## Proposed Fixes

### Fix 1: Add reward_scaling (EASY, HIGH IMPACT)

Add a `reward_scale` parameter to the spec and apply it in the training loop or environment wrapper.

```yaml
# In ppo_playground spec
env:
  reward_scale: 10.0  # Official mujoco_playground default
```

This requires a code change to support `reward_scale` in the env or algorithm. Simplest approach: multiply rewards by the scale factor in the PlaygroundVecEnv wrapper.

**Priority: 1 (do this first)** — Easy to implement, likely closes the gap for PointMass, AcrobotSwingup, and CartpoleBalanceSparse. Partial improvement for others.

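A minimal sketch of the wrapper-level scaling described above. `PlaygroundVecEnv` internals are not shown; this generic wrapper stands in for any vector env whose `step()` returns `(obs, reward, done, info)`:

```python
class RewardScaleWrapper:
    """Multiply env rewards by a constant factor, leaving everything else intact."""

    def __init__(self, env, reward_scale=10.0):
        self.env = env
        self.reward_scale = reward_scale

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Only the reward channel is scaled; returns/advantages downstream
        # then see the amplified signal, matching reward_scaling in Brax.
        return obs, reward * self.reward_scale, done, info
```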
### Fix 2: Revert minibatch_size to 4096 for base ppo_playground (EASY)

```yaml
ppo_playground:
  agent:
    algorithm:
      minibatch_size: 4096  # 15 minibatches; fewer but larger grad steps
```

**Priority: 2** — Likely restores CartpoleSwingup from 443 to ~803. May modestly improve other envs. The trade-off: fewer grad steps (240 vs 480) but larger minibatches, hence more stable gradients.

### Fix 3: Multi-unroll collection (MEDIUM DIFFICULTY, HIGHEST IMPACT)

The fundamental gap is that SLM-Lab collects only 1 unroll (30 steps) from each env before updating, while Brax collects 16 sequential unrolls (480 steps). To match official:

Option A: Increase `time_horizon` to 480 (= 30 * 16). This collects the same total data but changes GAE computation (advantages computed over 480 steps instead of 30). Not equivalent to official.

Option B: Add a `num_unrolls` parameter that collects multiple independent unrolls of `time_horizon` length before updating. This matches official behavior but requires a code change to the training loop.

Option C: Accept the batch size difference and compensate with reward_scaling + larger minibatch_size. Less optimal but no code changes needed beyond reward_scaling.

**Priority: 3** — Biggest potential impact but requires code changes. Try fixes 1-2 first and re-evaluate.

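Option B can be sketched as follows. The names (`collect_batch`, `num_unrolls`) and the env/policy interfaces are hypothetical simplifications, not SLM-Lab's actual training-loop API:

```python
def collect_batch(env, policy, time_horizon, num_unrolls):
    """Gather num_unrolls consecutive segments of time_horizon steps each.

    Segment boundaries are preserved so GAE can still be computed per
    time_horizon-step unroll, mirroring Brax's 16 unrolls of 30 steps.
    """
    unrolls = []
    obs = env.reset()
    for _ in range(num_unrolls):
        segment = []
        for _ in range(time_horizon):
            action = policy(obs)
            obs, reward, done, info = env.step(action)
            segment.append((obs, action, reward, done))
        unrolls.append(segment)  # GAE computed within each segment later
    return unrolls  # num_unrolls * time_horizon steps per env in total
```

With `time_horizon=30, num_unrolls=16` on a 2048-env vector env, this would gather the same 983K transitions per update as official, without stretching the GAE window the way Option A does.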
### Fix 4: Deepen value network (EASY)

```yaml
_value_body: &value_body
  modules:
    body:
      Sequential:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
        - LazyLinear: {out_features: 256}
        - SiLU:
```

**Priority: 4** — Minor impact expected. Try after fixes 1-2.

### Fix 5: Per-env spec variants for FingerTurn (if fixes 1-2 are insufficient)

If FingerTurn still fails after reward_scaling + the minibatch revert, create a dedicated variant with tuned hyperparameters (possibly lower gamma, a different lr). But try the general fixes first, since official uses default params for FingerTurn.

**Priority: 5** — Only if fixes 1-3 don't close the gap.

## Recommended Action Plan

1. **Implement reward_scale=10.0** in PlaygroundVecEnv (multiply rewards by the scale factor). Add `reward_scale` to the env spec. One-line code change plus a spec update.

2. **Revert minibatch_size to 4096** in the ppo_playground base spec. This gives 15 minibatches * 16 epochs = 240 grad steps (vs 480 now).

3. **Rerun the 5 worst-performing envs** with fixes 1+2:
   - FingerTurnEasy (570 → target 950)
   - FingerTurnHard (500 → target 950)
   - CartpoleSwingup (443 → target 800)
   - CartpoleSwingupSparse (270 → target 425)
   - FishSwim (530 → target 650)

4. **Evaluate results**. If FingerTurn still fails badly, investigate multi-unroll collection (Fix 3) or FingerTurn-specific tuning.

5. **Metric decision**: Switch to `final_strength` for score reporting. CartpoleBalanceSparse (final MA=992) and AcrobotSwingup (final MA=253) likely pass under the correct metric.

## Envs Likely Fixed by Metric Change Alone

These envs have a final MA above target but a low "strength" score due to slow early convergence:

| Env | strength | final MA | target | Passes with final_strength? |
|---|---|---|---|---|
| CartpoleBalanceSparse | 545 | 992 | 700 | YES |
| AcrobotSwingup | 172 | 253 | 220 | YES |

## Envs Requiring Spec Changes

| Env | Current | Target | Most likely fix |
|---|---|---|---|
| FingerTurnEasy | 570 | 950 | reward_scale + larger batch |
| FingerTurnHard | 500 | 950 | reward_scale + larger batch |
| CartpoleSwingup | 443 | 800 | Revert minibatch_size=4096 |
| CartpoleSwingupSparse | 270 | 425 | reward_scale |
| SwimmerSwimmer6 | 485 | 560 | reward_scale |
| PointMass | 863 | 900 | reward_scale |
| FishSwim | 530 | 650 | reward_scale + larger batch |
docs/plots/AcrobotSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/AcrobotSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/AeroCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/AlohaHandOver_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/AlohaSinglePegInsertion_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/ApolloJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/BallInCup_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/BarkourJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/BerkeleyHumanoidJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/BerkeleyHumanoidJoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/CartpoleBalanceSparse_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/CartpoleBalance_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/CartpoleSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/CartpoleSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/CheetahRun_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/FingerSpin_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/FingerTurnEasy_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/FingerTurnHard_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/FishSwim_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/G1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/G1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Go1Footstand_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Go1Getup_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Go1Handstand_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Go1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Go1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/H1InplaceGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/H1JoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HopperHop_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HopperStand_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HumanoidRun_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HumanoidStand_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/HumanoidWalk_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/LeapCubeReorient_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/LeapCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/Op3Joystick_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PandaOpenCabinet_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PandaPickCubeCartesian_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PandaPickCubeOrientation_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PandaPickCube_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PandaRobotiqPushCube_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PendulumSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/PointMass_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/ReacherEasy_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)
docs/plots/ReacherHard_multi_trial_graph_mean_returns_ma_vs_frames.png ADDED (Git LFS)