diff --git a/docs/BENCHMARKS.md b/docs/BENCHMARKS.md index 211759bf84aaaa39af3af93bcf7862d7a9047b6a..75179502a7b7acfc3723097e47c58fd4f8037e04 100644 --- a/docs/BENCHMARKS.md +++ b/docs/BENCHMARKS.md @@ -110,11 +110,12 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20 | Phase | Category | Envs | REINFORCE | SARSA | DQN | DDQN+PER | A2C | PPO | SAC | CrossQ | Overall | |-------|----------|------|-----------|-------|-----|----------|-----|-----|-----|--------|---------| | 1 | Classic Control | 3 | ✅ | ✅ | ⚠️ | ✅ | ✅ | ✅ | ✅ | ⚠️ | Done | -| 2 | Box2D | 2 | N/A | N/A | ⚠️ | ✅ | ❌ | ⚠️ | ⚠️ | ⚠️ | Done | +| 2 | Box2D | 2 | N/A | N/A | ⚠️ | ✅ | ❌ | ⚠️ | ⚠️ | ⚠️ | Done | | 3 | MuJoCo | 11 | N/A | N/A | N/A | N/A | N/A | ⚠️ | ⚠️ | ⚠️ | Done | -| 4 | Atari | 57 | N/A | N/A | N/A | Skip | Done | Done | Done | ❌ | Done | +| 4 | Atari | 57 | N/A | N/A | N/A | Skip | Done | Done | Done | ❌ | Done | +| 5 | Playground | 54 | N/A | N/A | N/A | N/A | N/A | 🔄 | 🔄 | N/A | In progress | -**Legend**: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable | ❌ Failed | 🔄 In progress/Pending | Skip Not started | N/A Not applicable +**Legend**: ✅ Solved | ⚠️ Close (>80%) | 📊 Acceptable | ❌ Failed | 🔄 In progress/Pending | Skip Not started | N/A Not applicable --- @@ -137,7 +138,7 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20 | A2C | ✅ | 496.68 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_cartpole_arc | [a2c_gae_cartpole_arc_2026_02_11_142531](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_cartpole_arc_2026_02_11_142531) | | PPO | ✅ | 498.94 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_cartpole_arc | [ppo_cartpole_arc_2026_02_11_144029](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_cartpole_arc_2026_02_11_144029) | | SAC | ✅ | 406.09 | 
[slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_cartpole_arc | [sac_cartpole_arc_2026_02_11_144155](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_cartpole_arc_2026_02_11_144155) | -| CrossQ | ⚠️ | 324.10 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_cartpole | [crossq_cartpole_2026_03_02_100434](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_cartpole_2026_03_02_100434) | +| CrossQ | ⚠️ | 334.59 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_cartpole | [crossq_cartpole_2026_03_02_100434](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_cartpole_2026_03_02_100434) | ![CartPole-v1](plots/CartPole-v1_multi_trial_graph_mean_returns_ma_vs_frames.png) @@ -166,7 +167,7 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20 | Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data | |-----------|--------|-----|-----------|-----------|---------| -| A2C | ❌ | -820.74 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_pendulum_arc | [a2c_gae_pendulum_arc_2026_02_11_162217](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_pendulum_arc_2026_02_11_162217) | +| A2C | ❌ | -820.74 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_pendulum_arc | [a2c_gae_pendulum_arc_2026_02_11_162217](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_pendulum_arc_2026_02_11_162217) | | PPO | ✅ | -174.87 | [slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_classic_arc.yaml) | ppo_pendulum_arc | 
[ppo_pendulum_arc_2026_02_11_162156](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_pendulum_arc_2026_02_11_162156) | | SAC | ✅ | -150.97 | [slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_classic_arc.yaml) | sac_pendulum_arc | [sac_pendulum_arc_2026_02_11_162240](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_pendulum_arc_2026_02_11_162240) | | CrossQ | ✅ | -145.66 | [slm_lab/spec/benchmark/crossq/crossq_classic.yaml](../slm_lab/spec/benchmark/crossq/crossq_classic.yaml) | crossq_pendulum | [crossq_pendulum_2026_02_28_130648](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_pendulum_2026_02_28_130648) | @@ -185,10 +186,10 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20 |-----------|--------|-----|-----------|-----------|---------| | DQN | ⚠️ | 195.21 | [slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml) | dqn_concat_lunar_arc | [dqn_concat_lunar_arc_2026_02_11_201407](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/dqn_concat_lunar_arc_2026_02_11_201407) | | DDQN+PER | ✅ | 265.90 | [slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/dqn/dqn_box2d_arc.yaml) | ddqn_per_concat_lunar_arc | [ddqn_per_concat_lunar_arc_2026_02_13_105115](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ddqn_per_concat_lunar_arc_2026_02_13_105115) | -| A2C | ❌ | 27.38 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_arc | [a2c_gae_lunar_arc_2026_02_11_224304](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_arc_2026_02_11_224304) | +| A2C | ❌ | 27.38 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_arc | 
[a2c_gae_lunar_arc_2026_02_11_224304](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_arc_2026_02_11_224304) | | PPO | ⚠️ | 183.30 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_arc | [ppo_lunar_arc_2026_02_11_201303](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_arc_2026_02_11_201303) | | SAC | ⚠️ | 106.17 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_arc | [sac_lunar_arc_2026_02_11_201417](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_arc_2026_02_11_201417) | -| CrossQ | ❌ | 139.21 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar | [crossq_lunar_2026_02_28_130733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_2026_02_28_130733) | +| CrossQ | ❌ | 139.21 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar | [crossq_lunar_2026_02_28_130733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_2026_02_28_130733) | ![LunarLander-v3](plots/LunarLander-v3_multi_trial_graph_mean_returns_ma_vs_frames.png) @@ -200,7 +201,7 @@ Search budget: ~3-4 trials per dimension (8 trials = 2-3 dims, 16 = 3-4 dims, 20 | Algorithm | Status | MA | SPEC_FILE | SPEC_NAME | HF Data | |-----------|--------|-----|-----------|-----------|---------| -| A2C | ❌ | -76.81 | [slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_continuous_arc | [a2c_gae_lunar_continuous_arc_2026_02_11_224301](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_continuous_arc_2026_02_11_224301) | +| A2C | ❌ | -76.81 | 
[slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_classic_arc.yaml) | a2c_gae_lunar_continuous_arc | [a2c_gae_lunar_continuous_arc_2026_02_11_224301](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_lunar_continuous_arc_2026_02_11_224301) | | PPO | ⚠️ | 132.58 | [slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_box2d_arc.yaml) | ppo_lunar_continuous_arc | [ppo_lunar_continuous_arc_2026_02_11_224229](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_lunar_continuous_arc_2026_02_11_224229) | | SAC | ⚠️ | 125.00 | [slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_box2d_arc.yaml) | sac_lunar_continuous_arc | [sac_lunar_continuous_arc_2026_02_12_222203](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_lunar_continuous_arc_2026_02_12_222203) | | CrossQ | ✅ | 268.91 | [slm_lab/spec/benchmark/crossq/crossq_box2d.yaml](../slm_lab/spec/benchmark/crossq/crossq_box2d.yaml) | crossq_lunar_continuous | [crossq_lunar_continuous_2026_03_01_140517](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_lunar_continuous_2026_03_01_140517) | @@ -338,7 +339,7 @@ source .env && slm-lab run-remote --gpu \ |-----------|--------|-----|-----------|-----------|---------| | PPO | ✅ | 2661.26 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_mujoco_arc | [ppo_mujoco_arc_humanoid_2026_02_12_185439](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_mujoco_arc_humanoid_2026_02_12_185439) | | SAC | ✅ | 1989.65 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_humanoid_arc | [sac_humanoid_arc_2026_02_12_020016](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_humanoid_arc_2026_02_12_020016) | -| CrossQ | ✅ | 1102.00 | 
[slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_humanoid | [crossq_humanoid_2026_03_01_165208](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_humanoid_2026_03_01_165208) | +| CrossQ | ✅ | 1755.29 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_humanoid | [crossq_humanoid_2026_03_01_165208](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_humanoid_2026_03_01_165208) | ![Humanoid-v5](plots/Humanoid-v5_multi_trial_graph_mean_returns_ma_vs_frames.png) @@ -422,7 +423,7 @@ source .env && slm-lab run-remote --gpu \ |-----------|--------|-----|-----------|-----------|---------| | PPO | ✅ | 282.44 | [slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_mujoco_arc.yaml) | ppo_swimmer_arc | [ppo_swimmer_arc_swimmer_2026_02_12_100445](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_swimmer_arc_swimmer_2026_02_12_100445) | | SAC | ✅ | 301.34 | [slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_mujoco_arc.yaml) | sac_swimmer_arc | [sac_swimmer_arc_2026_02_12_054349](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_swimmer_arc_2026_02_12_054349) | -| CrossQ | ✅ | 221.12 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_swimmer | [crossq_swimmer_2026_02_21_134711](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_swimmer_2026_02_21_134711) | +| CrossQ | ✅ | 221.12 | [slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml](../slm_lab/spec/benchmark/crossq/crossq_mujoco.yaml) | crossq_swimmer | [crossq_swimmer_2026_02_21_184204](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_swimmer_2026_02_21_184204) | ![Swimmer-v5](plots/Swimmer-v5_multi_trial_graph_mean_returns_ma_vs_frames.png) @@ 
-455,7 +456,7 @@ source .env && slm-lab run-remote --gpu \ - **A2C**: [a2c_atari_arc.yaml](../slm_lab/spec/benchmark_arc/a2c/a2c_atari_arc.yaml) - RMSprop (lr=7e-4), training_frequency=32 - **PPO**: [ppo_atari_arc.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_atari_arc.yaml) - AdamW (lr=2.5e-4), minibatch=256, horizon=128, epochs=4, max_frame=10e6 - **SAC**: [sac_atari_arc.yaml](../slm_lab/spec/benchmark_arc/sac/sac_atari_arc.yaml) - Categorical SAC, AdamW (lr=3e-4), training_iter=3, training_frequency=4, max_frame=2e6 -- **CrossQ**: [crossq_atari.yaml](../slm_lab/spec/benchmark/crossq/crossq_atari.yaml) - Categorical CrossQ, AdamW (lr=1e-3), training_iter=3, training_frequency=4, max_frame=2e6 (experimental — limited results on 6 games) +- **CrossQ**: [crossq_atari.yaml](../slm_lab/spec/benchmark/crossq/crossq_atari.yaml) - Categorical CrossQ, Adam (lr=1e-3), training_iter=1, training_frequency=4, max_frame=2e6 (experimental — limited results on 6 games) **PPO Lambda Variants** (table shows best result per game): @@ -486,7 +487,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ > **Note**: HF Data links marked "-" indicate runs completed but not yet uploaded to HuggingFace. Scores are extracted from local trial_metrics. 
-| ENV | Score | SPEC_NAME | HF Data | +| ENV | MA | SPEC_NAME | HF Data | |-----|-------|-----------|---------| | ALE/AirRaid-v5 | 7042.84 | ppo_atari_arc | [ppo_atari_arc_airraid_2026_02_13_124015](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_airraid_2026_02_13_124015) | | | 1832.54 | sac_atari_arc | [sac_atari_arc_airraid_2026_02_17_104002](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_airraid_2026_02_17_104002) | @@ -530,7 +531,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ | ALE/Breakout-v5 | 326.47 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_breakout_2026_02_13_230455](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_breakout_2026_02_13_230455) | | | 20.23 | sac_atari_arc | [sac_atari_arc_breakout_2026_02_15_201235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_breakout_2026_02_15_201235) | | | 273 | a2c_gae_atari_arc | [a2c_gae_atari_breakout_2026_01_31_213610](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_breakout_2026_01_31_213610) | -| | ❌ 4.40 | crossq_atari | [crossq_atari_breakout_2026_02_25_030241](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_breakout_2026_02_25_030241) | +| | ❌ 4.40 | crossq_atari | [crossq_atari_breakout_2026_02_25_030241](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_breakout_2026_02_25_030241) | | ALE/Carnival-v5 | 3912.59 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_carnival_2026_02_13_230438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_carnival_2026_02_13_230438) | | | 3501.37 | sac_atari_arc | [sac_atari_arc_carnival_2026_02_17_105834](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_carnival_2026_02_17_105834) | | | 2170 | a2c_gae_atari_arc | 
[a2c_gae_atari_carnival_2026_02_01_082726](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_carnival_2026_02_01_082726) | @@ -594,7 +595,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ | ALE/MsPacman-v5 | 2330.74 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_mspacman_2026_02_14_102435](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_mspacman_2026_02_14_102435) | | | 1336.96 | sac_atari_arc | [sac_atari_arc_mspacman_2026_02_17_221523](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_mspacman_2026_02_17_221523) | | | 2110 | a2c_gae_atari_arc | [a2c_gae_atari_mspacman_2026_02_01_001100](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_mspacman_2026_02_01_001100) | -| | ❌ 327.79 | crossq_atari | [crossq_atari_mspacman_2026_02_23_171317](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_mspacman_2026_02_23_171317) | +| | ❌ 327.79 | crossq_atari | [crossq_atari_mspacman_2026_02_23_171317](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_mspacman_2026_02_23_171317) | | ALE/NameThisGame-v5 | 6879.23 | ppo_atari_arc | [ppo_atari_arc_namethisgame_2026_02_14_103319](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_namethisgame_2026_02_14_103319) | | | 3992.71 | sac_atari_arc | [sac_atari_arc_namethisgame_2026_02_17_220905](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_namethisgame_2026_02_17_220905) | | | 5412 | a2c_gae_atari_arc | [a2c_gae_atari_namethisgame_2026_02_01_132733](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_namethisgame_2026_02_01_132733) | @@ -604,14 +605,14 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ | ALE/Pong-v5 | 16.69 | ppo_atari_lam85_arc | 
[ppo_atari_lam85_arc_pong_2026_02_14_103722](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_pong_2026_02_14_103722) | | | 10.89 | sac_atari_arc | [sac_atari_arc_pong_2026_02_17_160429](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pong_2026_02_17_160429) | | | 10.17 | a2c_gae_atari_arc | [a2c_gae_atari_pong_2026_01_31_213635](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pong_2026_01_31_213635) | -| | ❌ -20.59 | crossq_atari | [crossq_atari_pong_2026_02_23_171158](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_pong_2026_02_23_171158) | +| | ❌ -20.59 | crossq_atari | [crossq_atari_pong_2026_02_23_171158](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_pong_2026_02_23_171158) | | ALE/Pooyan-v5 | 5308.66 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_pooyan_2026_02_14_114730](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_pooyan_2026_02_14_114730) | | | 2530.78 | sac_atari_arc | [sac_atari_arc_pooyan_2026_02_17_220346](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_pooyan_2026_02_17_220346) | | | 2997 | a2c_gae_atari_arc | [a2c_gae_atari_pooyan_2026_02_01_132748](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_pooyan_2026_02_01_132748) | | ALE/Qbert-v5 | 15460.48 | ppo_atari_arc | [ppo_atari_arc_qbert_2026_02_14_120409](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_qbert_2026_02_14_120409) | | | 3331.98 | sac_atari_arc | [sac_atari_arc_qbert_2026_02_17_223117](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_qbert_2026_02_17_223117) | | | 12619 | a2c_gae_atari_arc | [a2c_gae_atari_qbert_2026_01_31_213720](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_qbert_2026_01_31_213720) | -| | ❌ 3189.73 | crossq_atari | 
[crossq_atari_qbert_2026_02_25_030458](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_qbert_2026_02_25_030458) | +| | ❌ 3189.73 | crossq_atari | [crossq_atari_qbert_2026_02_25_030458](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_qbert_2026_02_25_030458) | | ALE/Riverraid-v5 | 9599.75 | ppo_atari_lam85_arc | [ppo_atari_lam85_arc_riverraid_2026_02_14_124700](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam85_arc_riverraid_2026_02_14_124700) | | | 4744.95 | sac_atari_arc | [sac_atari_arc_riverraid_2026_02_18_014310](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_riverraid_2026_02_18_014310) | | | 6558 | a2c_gae_atari_arc | [a2c_gae_atari_riverraid_2026_02_01_132507](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_riverraid_2026_02_01_132507) | @@ -624,7 +625,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ | ALE/Seaquest-v5 | 1775.14 | ppo_atari_arc | [ppo_atari_arc_seaquest_2026_02_11_095444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_seaquest_2026_02_11_095444) | | | 1565.44 | sac_atari_arc | [sac_atari_arc_seaquest_2026_02_18_020822](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_seaquest_2026_02_18_020822) | | | 850 | a2c_gae_atari_arc | [a2c_gae_atari_seaquest_2026_02_01_001001](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_seaquest_2026_02_01_001001) | -| | ❌ 234.63 | crossq_atari | [crossq_atari_seaquest_2026_02_25_030441](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_seaquest_2026_02_25_030441) | +| | ❌ 234.63 | crossq_atari | [crossq_atari_seaquest_2026_02_25_030441](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_seaquest_2026_02_25_030441) | | ALE/Skiing-v5 | -28217.28 | ppo_atari_arc | 
[ppo_atari_arc_skiing_2026_02_14_174807](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_skiing_2026_02_14_174807) | | | -17464.22 | sac_atari_arc | [sac_atari_arc_skiing_2026_02_18_024444](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_skiing_2026_02_18_024444) | | | -14235 | a2c_gae_atari_arc | [a2c_gae_atari_skiing_2026_02_01_132451](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_skiing_2026_02_01_132451) | @@ -634,7 +635,7 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ | ALE/SpaceInvaders-v5 | 892.49 | ppo_atari_arc | [ppo_atari_arc_spaceinvaders_2026_02_14_131114](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_arc_spaceinvaders_2026_02_14_131114) | | | 507.33 | sac_atari_arc | [sac_atari_arc_spaceinvaders_2026_02_18_033139](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_spaceinvaders_2026_02_18_033139) | | | 784 | a2c_gae_atari_arc | [a2c_gae_atari_spaceinvaders_2026_02_01_000950](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_spaceinvaders_2026_02_01_000950) | -| | ❌ 404.50 | crossq_atari | [crossq_atari_spaceinvaders_2026_02_25_030410](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_spaceinvaders_2026_02_25_030410) | +| | ❌ 404.50 | crossq_atari | [crossq_atari_spaceinvaders_2026_02_25_030410](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/crossq_atari_spaceinvaders_2026_02_25_030410) | | ALE/StarGunner-v5 | 49328.73 | ppo_atari_lam70_arc | [ppo_atari_lam70_arc_stargunner_2026_02_14_131149](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_atari_lam70_arc_stargunner_2026_02_14_131149) | | | 4295.97 | sac_atari_arc | [sac_atari_arc_stargunner_2026_02_18_033151](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/sac_atari_arc_stargunner_2026_02_18_033151) | | | 8665 | a2c_gae_atari_arc | 
[a2c_gae_atari_stargunner_2026_02_01_132406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/a2c_gae_atari_stargunner_2026_02_01_132406) | @@ -760,3 +761,123 @@ source .env && slm-lab run-remote --gpu -s env=ENV \ +--- + +### Phase 5: MuJoCo Playground (JAX/MJX GPU-Accelerated) + +[MuJoCo Playground](https://google-deepmind.github.io/mujoco_playground/) | Continuous state/action | MJWarp GPU backend + +**Settings**: max_frame 100M | num_envs 2048 | max_session 4 + +**Spec file**: [ppo_playground.yaml](../slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml) — all envs via `-s env=playground/ENV` + +**Reproduce**: +```bash +source .env && slm-lab run-remote --gpu \ + slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml SPEC_NAME train \ + -s env=playground/ENV -s max_frame=100000000 -n NAME +``` + +#### Phase 5.1: DM Control Suite (25 envs) + +Classic control and locomotion tasks from the DeepMind Control Suite, ported to MJWarp GPU simulation. + +| ENV | MA | SPEC_NAME | HF Data | +|-----|-----|-----------|---------| +| playground/AcrobotSwingup | 253.24 | ppo_playground_vnorm | [ppo_playground_acrobotswingup_2026_03_12_175809](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_acrobotswingup_2026_03_12_175809) | +| playground/AcrobotSwingupSparse | 146.98 | ppo_playground_vnorm | [ppo_playground_vnorm_acrobotswingupsparse_2026_03_14_161212](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_acrobotswingupsparse_2026_03_14_161212) | +| playground/BallInCup | 942.44 | ppo_playground_vnorm | [ppo_playground_ballincup_2026_03_12_105443](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_ballincup_2026_03_12_105443) | +| playground/CartpoleBalance | 968.23 | ppo_playground_vnorm | [ppo_playground_cartpolebalance_2026_03_12_141924](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_cartpolebalance_2026_03_12_141924) | +| 
playground/CartpoleBalanceSparse | 995.34 | ppo_playground_constlr | [ppo_playground_constlr_cartpolebalancesparse_2026_03_14_000352](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_constlr_cartpolebalancesparse_2026_03_14_000352) | +| playground/CartpoleSwingup | 729.09 | ppo_playground_constlr | [ppo_playground_constlr_cartpoleswingup_2026_03_17_041102](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_constlr_cartpoleswingup_2026_03_17_041102) | +| playground/CartpoleSwingupSparse | 521.98 | ppo_playground_constlr | [ppo_playground_constlr_cartpoleswingupsparse_2026_03_13_233449](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_constlr_cartpoleswingupsparse_2026_03_13_233449) | +| playground/CheetahRun | 883.44 | ppo_playground_vnorm | [ppo_playground_vnorm_cheetahrun_2026_03_14_161211](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_cheetahrun_2026_03_14_161211) | +| playground/FingerSpin | 713.35 | ppo_playground_fingerspin | [ppo_playground_fingerspin_fingerspin_2026_03_13_033911](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_fingerspin_fingerspin_2026_03_13_033911) | +| playground/FingerTurnEasy | 663.58 | ppo_playground_vnorm | [ppo_playground_fingerturneasy_2026_03_12_175835](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_fingerturneasy_2026_03_12_175835) | +| playground/FingerTurnHard | 590.43 | ppo_playground_vnorm_constlr | [ppo_playground_vnorm_constlr_fingerturnhard_2026_03_16_234509](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_fingerturnhard_2026_03_16_234509) | +| playground/FishSwim | 580.57 | ppo_playground_vnorm_constlr_clip03 | 
[ppo_playground_vnorm_constlr_clip03_fishswim_2026_03_14_002112](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_clip03_fishswim_2026_03_14_002112) | +| playground/HopperHop | 22.00 | ppo_playground_vnorm | [ppo_playground_hopperhop_2026_03_12_110855](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_hopperhop_2026_03_12_110855) | +| playground/HopperStand | 237.15 | ppo_playground_vnorm | [ppo_playground_vnorm_hopperstand_2026_03_14_095438](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_hopperstand_2026_03_14_095438) | +| playground/HumanoidRun | 18.83 | ppo_playground_humanoid | [ppo_playground_humanoid_humanoidrun_2026_03_14_115522](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_humanoid_humanoidrun_2026_03_14_115522) | +| playground/HumanoidStand | 114.86 | ppo_playground_humanoid | [ppo_playground_humanoid_humanoidstand_2026_03_14_115516](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_humanoid_humanoidstand_2026_03_14_115516) | +| playground/HumanoidWalk | 47.01 | ppo_playground_humanoid | [ppo_playground_humanoid_humanoidwalk_2026_03_14_172235](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_humanoid_humanoidwalk_2026_03_14_172235) | +| playground/PendulumSwingup | 637.46 | ppo_playground_pendulum | [ppo_playground_pendulum_pendulumswingup_2026_03_13_033818](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_pendulum_pendulumswingup_2026_03_13_033818) | +| playground/PointMass | 868.09 | ppo_playground_vnorm_constlr | [ppo_playground_vnorm_constlr_pointmass_2026_03_14_095452](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_pointmass_2026_03_14_095452) | +| playground/ReacherEasy | 955.08 | ppo_playground_vnorm | 
[ppo_playground_reachereasy_2026_03_12_122115](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_reachereasy_2026_03_12_122115) | +| playground/ReacherHard | 946.99 | ppo_playground_vnorm | [ppo_playground_reacherhard_2026_03_12_123226](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_reacherhard_2026_03_12_123226) | +| playground/SwimmerSwimmer6 | 591.13 | ppo_playground_vnorm_constlr | [ppo_playground_vnorm_constlr_swimmerswimmer6_2026_03_14_000406](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_constlr_swimmerswimmer6_2026_03_14_000406) | +| playground/WalkerRun | 759.71 | ppo_playground_vnorm | [ppo_playground_vnorm_walkerrun_2026_03_14_161354](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_walkerrun_2026_03_14_161354) | +| playground/WalkerStand | 948.35 | ppo_playground_vnorm | [ppo_playground_vnorm_walkerstand_2026_03_14_161415](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_walkerstand_2026_03_14_161415) | +| playground/WalkerWalk | 945.31 | ppo_playground_vnorm | [ppo_playground_vnorm_walkerwalk_2026_03_14_161338](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_vnorm_walkerwalk_2026_03_14_161338) | + +| | | | +|---|---|---| +| ![AcrobotSwingup](plots/AcrobotSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![AcrobotSwingupSparse](plots/AcrobotSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![BallInCup](plots/BallInCup_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![CartpoleBalance](plots/CartpoleBalance_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![CartpoleBalanceSparse](plots/CartpoleBalanceSparse_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![CartpoleSwingup](plots/CartpoleSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| 
![CartpoleSwingupSparse](plots/CartpoleSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![CheetahRun](plots/CheetahRun_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![FingerSpin](plots/FingerSpin_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![FingerTurnEasy](plots/FingerTurnEasy_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![FingerTurnHard](plots/FingerTurnHard_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![FishSwim](plots/FishSwim_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![HopperHop](plots/HopperHop_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![HopperStand](plots/HopperStand_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![HumanoidRun](plots/HumanoidRun_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![HumanoidStand](plots/HumanoidStand_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![HumanoidWalk](plots/HumanoidWalk_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![PendulumSwingup](plots/PendulumSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![PointMass](plots/PointMass_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![ReacherEasy](plots/ReacherEasy_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![ReacherHard](plots/ReacherHard_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![SwimmerSwimmer6](plots/SwimmerSwimmer6_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![WalkerRun](plots/WalkerRun_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![WalkerStand](plots/WalkerStand_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![WalkerWalk](plots/WalkerWalk_multi_trial_graph_mean_returns_ma_vs_frames.png) | | | + +#### Phase 5.2: Locomotion Robots (19 envs) + +Real-world robot locomotion — quadrupeds (Go1, Spot, Barkour) and humanoids (H1, G1, T1, Op3, Apollo, BerkeleyHumanoid) on flat and rough terrain. 
+ +| ENV | MA | SPEC_NAME | HF Data | +|-----|-----|-----------|---------| +| playground/ApolloJoystickFlatTerrain | 17.44 | ppo_playground_loco_precise | [ppo_playground_loco_precise_apollojoystickflatterrain_2026_03_14_210939](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_apollojoystickflatterrain_2026_03_14_210939) | +| playground/BarkourJoystick | 0.0 | ppo_playground_loco | [ppo_playground_loco_barkourjoystick_2026_03_14_194525](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_barkourjoystick_2026_03_14_194525) | +| playground/BerkeleyHumanoidJoystickFlatTerrain | 32.29 | ppo_playground_loco_precise | [ppo_playground_loco_precise_berkeleyhumanoidjoystickflatterrain_2026_03_14_213019](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_berkeleyhumanoidjoystickflatterrain_2026_03_14_213019) | +| playground/BerkeleyHumanoidJoystickRoughTerrain | 21.25 | ppo_playground_loco_precise | [ppo_playground_loco_precise_berkeleyhumanoidjoystickroughterrain_2026_03_15_150211](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_berkeleyhumanoidjoystickroughterrain_2026_03_15_150211) | +| playground/G1JoystickFlatTerrain | 1.85 | ppo_playground_loco_precise | [ppo_playground_loco_precise_g1joystickflatterrain_2026_03_15_150219](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_g1joystickflatterrain_2026_03_15_150219) | +| playground/G1JoystickRoughTerrain | -2.75 | ppo_playground_loco_precise | [ppo_playground_loco_precise_g1joystickroughterrain_2026_03_19_015137](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_g1joystickroughterrain_2026_03_19_015137) | +| playground/Go1Footstand | 23.48 | ppo_playground_loco_precise | 
[ppo_playground_loco_precise_go1footstand_2026_03_16_174009](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_go1footstand_2026_03_16_174009) | +| playground/Go1Getup | 18.16 | ppo_playground_loco_go1 | [ppo_playground_loco_go1_go1getup_2026_03_16_132801](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_go1_go1getup_2026_03_16_132801) | +| playground/Go1Handstand | 17.88 | ppo_playground_loco_precise | [ppo_playground_loco_precise_go1handstand_2026_03_16_155437](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_go1handstand_2026_03_16_155437) | +| playground/Go1JoystickFlatTerrain | 0.0 | ppo_playground_loco | [ppo_playground_loco_go1joystickflatterrain_2026_03_14_204658](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_go1joystickflatterrain_2026_03_14_204658) | +| playground/Go1JoystickRoughTerrain | 0.00 | ppo_playground_loco | [ppo_playground_loco_go1joystickroughterrain_2026_03_15_150321](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_go1joystickroughterrain_2026_03_15_150321) | +| playground/H1InplaceGaitTracking | 11.95 | ppo_playground_loco_precise | [ppo_playground_loco_precise_h1inplacegaittracking_2026_03_16_170327](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_h1inplacegaittracking_2026_03_16_170327) | +| playground/H1JoystickGaitTracking | 31.11 | ppo_playground_loco_precise | [ppo_playground_loco_precise_h1joystickgaittracking_2026_03_16_170412](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_h1joystickgaittracking_2026_03_16_170412) | +| playground/Op3Joystick | 0.00 | ppo_playground_loco | [ppo_playground_loco_op3joystick_2026_03_15_150120](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_op3joystick_2026_03_15_150120) | +| 
playground/SpotFlatTerrainJoystick | 48.58 | ppo_playground_loco_precise | [ppo_playground_loco_precise_spotflatterrainjoystick_2026_03_16_180747](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_spotflatterrainjoystick_2026_03_16_180747) | +| playground/SpotGetup | 19.39 | ppo_playground_loco | [ppo_playground_loco_spotgetup_2026_03_14_213703](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_spotgetup_2026_03_14_213703) | +| playground/SpotJoystickGaitTracking | 36.90 | ppo_playground_loco | [ppo_playground_loco_spotjoystickgaittracking_2026_03_19_015106](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_spotjoystickgaittracking_2026_03_19_015106) | +| playground/T1JoystickFlatTerrain | 13.42 | ppo_playground_loco_precise | [ppo_playground_loco_precise_t1joystickflatterrain_2026_03_14_220250](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_t1joystickflatterrain_2026_03_14_220250) | +| playground/T1JoystickRoughTerrain | 2.58 | ppo_playground_loco_precise | [ppo_playground_loco_precise_t1joystickroughterrain_2026_03_15_162332](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_precise_t1joystickroughterrain_2026_03_15_162332) | + +| | | | +|---|---|---| +| ![ApolloJoystickFlatTerrain](plots/ApolloJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![BarkourJoystick](plots/BarkourJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![BerkeleyHumanoidJoystickFlatTerrain](plots/BerkeleyHumanoidJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![G1JoystickFlatTerrain](plots/G1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![Go1Footstand](plots/Go1Footstand_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![Go1Handstand](plots/Go1Handstand_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| 
![H1InplaceGaitTracking](plots/H1InplaceGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![H1JoystickGaitTracking](plots/H1JoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![Op3Joystick](plots/Op3Joystick_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![SpotFlatTerrainJoystick](plots/SpotFlatTerrainJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![SpotGetup](plots/SpotGetup_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![SpotJoystickGaitTracking](plots/SpotJoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![BerkeleyHumanoidJoystickRoughTerrain](plots/BerkeleyHumanoidJoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![Go1Getup](plots/Go1Getup_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![Go1JoystickFlatTerrain](plots/Go1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![Go1JoystickRoughTerrain](plots/Go1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![T1JoystickFlatTerrain](plots/T1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![T1JoystickRoughTerrain](plots/T1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png) | + +#### Phase 5.3: Manipulation (10 envs) + +Robotic manipulation — Panda arm pick/place, Aloha bimanual, Leap dexterous hand, and AeroCube orientation tasks. 
+ +| ENV | MA | SPEC_NAME | HF Data | +|-----|-----|-----------|---------| +| playground/AeroCubeRotateZAxis | -3.09 | ppo_playground_loco | [ppo_playground_loco_aerocuberotatezaxis_2026_03_20_012502](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_aerocuberotatezaxis_2026_03_20_012502) | +| playground/AlohaHandOver | 3.65 | ppo_playground_loco | [ppo_playground_loco_alohahandover_2026_03_15_023712](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_alohahandover_2026_03_15_023712) | +| playground/AlohaSinglePegInsertion | 220.93 | ppo_playground_manip_aloha_peg | [ppo_playground_manip_aloha_peg_alohasinglepeginsertion_2026_03_17_122613](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_manip_aloha_peg_alohasinglepeginsertion_2026_03_17_122613) | +| playground/LeapCubeReorient | 74.68 | ppo_playground_loco | [ppo_playground_loco_leapcubereorient_2026_03_15_150420](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_leapcubereorient_2026_03_15_150420) | +| playground/LeapCubeRotateZAxis | 91.65 | ppo_playground_loco | [ppo_playground_loco_leapcuberotatezaxis_2026_03_15_150334](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_leapcuberotatezaxis_2026_03_15_150334) | +| playground/PandaOpenCabinet | 11081.51 | ppo_playground_loco | [ppo_playground_loco_pandaopencabinet_2026_03_15_150318](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandaopencabinet_2026_03_15_150318) | +| playground/PandaPickCube | 4586.13 | ppo_playground_loco | [ppo_playground_loco_pandapickcube_2026_03_15_023744](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandapickcube_2026_03_15_023744) | +| playground/PandaPickCubeCartesian | 10.58 | ppo_playground_loco | 
[ppo_playground_loco_pandapickcubecartesian_2026_03_15_023810](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandapickcubecartesian_2026_03_15_023810) | +| playground/PandaPickCubeOrientation | 4281.66 | ppo_playground_loco | [ppo_playground_loco_pandapickcubeorientation_2026_03_19_015108](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandapickcubeorientation_2026_03_19_015108) | +| playground/PandaRobotiqPushCube | 1.31 | ppo_playground_loco | [ppo_playground_loco_pandarobotiqpushcube_2026_03_15_042131](https://huggingface.co/datasets/SLM-Lab/benchmark/tree/main/data/ppo_playground_loco_pandarobotiqpushcube_2026_03_15_042131) | + +| | | | +|---|---|---| +| ![AeroCubeRotateZAxis](plots/AeroCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![AlohaHandOver](plots/AlohaHandOver_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![AlohaSinglePegInsertion](plots/AlohaSinglePegInsertion_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![LeapCubeReorient](plots/LeapCubeReorient_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![LeapCubeRotateZAxis](plots/LeapCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![PandaOpenCabinet](plots/PandaOpenCabinet_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![PandaPickCube](plots/PandaPickCube_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![PandaPickCubeCartesian](plots/PandaPickCubeCartesian_multi_trial_graph_mean_returns_ma_vs_frames.png) | ![PandaPickCubeOrientation](plots/PandaPickCubeOrientation_multi_trial_graph_mean_returns_ma_vs_frames.png) | +| ![PandaRobotiqPushCube](plots/PandaRobotiqPushCube_multi_trial_graph_mean_returns_ma_vs_frames.png) | | | + diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md index ee4067959dabdb043fd95e6baf2233b901a9f91a..0b7df2d3328c7c79193d3b98928af91f4eebb6d3 100644 --- a/docs/CHANGELOG.md +++ b/docs/CHANGELOG.md @@ -1,3 +1,18 @@ +# SLM-Lab v5.3.0 + +MuJoCo Playground 
integration. 54 GPU-accelerated environments via JAX/MJX backend. + +**What changed:** +- **New env backend**: MuJoCo Playground (DeepMind) — 25 DM Control Suite, 19 Locomotion (Go1, Spot, H1, G1), 10 Manipulation (Panda, ALOHA, LEAP) +- **PlaygroundVecEnv**: JAX-native vectorized env wrapper with `jax.vmap` batching and Brax auto-reset. Converts JAX arrays to numpy at the API boundary for PyTorch compatibility +- **Prefix routing**: `playground/EnvName` in specs routes to PlaygroundVecEnv instead of Gymnasium +- **Optional dependency**: `uv sync --group playground` installs `mujoco-playground`, `jax`, `brax` +- **Benchmark specs**: `slm_lab/spec/benchmark/playground/` — SAC specs for all 54 envs across 3 categories + + + +--- + # SLM-Lab v5.2.0 Training path performance optimization. **+15% SAC throughput on GPU**, verified with no score regression. diff --git a/docs/PHASE5_OPS.md b/docs/PHASE5_OPS.md new file mode 100644 index 0000000000000000000000000000000000000000..0606c2cf300fecd95b496b48f9c5e50f5f6020dc --- /dev/null +++ b/docs/PHASE5_OPS.md @@ -0,0 +1,650 @@ +# Phase 5.1 PPO — Operations Tracker + +Single source of truth for in-flight work. Resume from here. + +--- + +## Principles + +1. **Two canonical specs**: `ppo_playground` (DM Control) and `ppo_playground_loco` (Loco). Per-env variants only when officially required: `ppo_playground_fingerspin` (gamma=0.95), `ppo_playground_pendulum` (training_epoch=4, action_repeat=4 via code). +2. **100M frames hard cap** — no extended runs. If an env doesn't hit target at 100M, fix the spec. +3. **Strategic reruns**: only rerun failing/⚠️ envs. Already-✅ envs skip revalidation. +4. **Score metric**: use `total_reward_ma` (final moving average of total reward) — measures end-of-training performance and matches mujoco_playground reference scores. +5. **Official reference**: check `~/.cache/uv/archive-v0/ON8dY3irQZTYI3Bok0SlC/mujoco_playground/config/dm_control_suite_params.py` for per-env overrides. 
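Principle 4 in action — a minimal sketch of the score metric, assuming `total_reward_ma` is the mean over a trailing window of episode returns (the helper name and window are illustrative; SLM-Lab's actual metric computation may differ in details):

```python
import numpy as np

def total_reward_ma(episode_returns, window=100):
    # Final moving average: mean over the trailing window only,
    # so a slow start does not drag the score down.
    r = np.asarray(episode_returns, dtype=float)
    return float(r[-min(window, len(r)):].mean())

# A late converger: low returns for 50 episodes, then a 900 plateau.
run = [10.0] * 50 + [900.0] * 100
print(total_reward_ma(run))  # 900.0
print(float(np.mean(run)))   # ~603.3 -- whole-trajectory mean, for contrast
```

This is why a late-converging env can look weak under a trajectory-averaged metric while still matching the mujoco_playground reference score at end of training.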
+ +--- + +## Wave 3 (2026-03-16) + +**Fixes applied:** +- stderr suppression: MuJoCo C-level warnings (ccd_iterations, nefc overflow, broadphase overflow) silenced in playground.py +- obs fix: _get_obs now passes only "state" key for dict-obs envs (was incorrectly concatenating privileged_state+state) + +**Envs graduated to ✅ (close enough):** +FishSwim, PointMass, ReacherHard, WalkerStand, WalkerWalk, SpotGetup, SpotJoystickGaitTracking, AlohaHandOver + +**Failing envs by root cause:** +- Humanoid double-norm (rs10 fix): HumanoidStand (114→700), HumanoidWalk (47→500), HumanoidRun (18→130) +- Dict obs fix (now fixed): Go1Flat/Rough/Getup/Handstand, G1Flat/Rough, T1Flat/Rough +- Unknown: BarkourJoystick (0/35), Op3Joystick (0/20) +- Needs hparam work: H1Inplace (4→10), H1Joystick (16→30), SpotFlat (11→30) +- Manipulation: AlohaPeg (188→300), LeapCubeReorient (74→200) +- Infeasible: PandaRobotiqPushCube, AeroCubeRotateZAxis + +**Currently running:** (to be populated by ops) + +--- + +## Currently Running (as of 2026-03-14 ~00:00) + +**Wave V (p5-ppo17) — Constant LR test (4 runs, just launched)** + +Testing constant LR (Brax default) in isolation — never tested before. Key hypothesis: LR decay hurts late-converging envs. + +| Run | Env | Spec | Key Change | Old Best | Target | +|---|---|---|---|---|---| +| p5-ppo17-csup | CartpoleSwingup | constlr | constant LR + minibatch=4096 | 576.1 | 800 | +| p5-ppo17-csupsparse | CartpoleSwingupSparse | constlr | constant LR + minibatch=4096 | 296.3 | 425 | +| p5-ppo17-acrobot | AcrobotSwingup | vnorm_constlr | constant LR + vnorm | 173 | 220 | +| p5-ppo17-fteasy | FingerTurnEasy | vnorm_constlr | constant LR + vnorm | 571 | 950 | + +**Wave IV-H (p5-ppo16h) — Humanoid with wider policy (3 runs, ~2.5h remaining)** + +New `ppo_playground_humanoid` variant: 2×256 policy (vs 2×64), constant LR, vnorm=true. +Based on Phase 3 Gymnasium Humanoid-v5 success (2661 MA with 2×256 + constant LR). 
+ +| Run | Env | Old Best | Target | +|---|---|---|---| +| p5-ppo16h-hstand | HumanoidStand | 18.36 | 700 | +| p5-ppo16h-hwalk | HumanoidWalk | 7.68 | 500 | +| p5-ppo16h-hrun | HumanoidRun | 3.19 | 130 | + +**Wave VI (p5-ppo18) — Brax 4×32 policy + constant LR + vnorm (3 runs, just launched)** + +Testing Brax default policy architecture (4 layers × 32 units vs our 2 × 64). +Deeper narrower policy may learn better features for precision tasks. + +| Run | Env | Old Best | Target | +|---|---|---|---| +| p5-ppo18-fteasy | FingerTurnEasy | 571 | 950 | +| p5-ppo18-fthard | FingerTurnHard | 484 | 950 | +| p5-ppo18-fishswim | FishSwim | 463 | 650 | + +**Wave IV tail (p5-ppo16) — completed** + +| Run | Env | strength | Target | New best? | +|---|---|---|---|---| +| p5-ppo16-swimmer6 | SwimmerSwimmer6 | 509.3 | 560 | ✅ New best (final_strength=560.6) | +| p5-ppo16-fishswim | FishSwim | 420.6 | 650 | ❌ Worse than 463 | + +**Wave IV results (p5-ppo16, vnorm=true rerun with reverted spec — completed):** + +All ran with vnorm=true. CartpoleSwingup/Sparse worse (vnorm=false is better for them — wrong setting). +Precision envs also scored below old bests. Humanoid still failing with standard 2×64 policy. + +| Env | p16 strength | Old Best | Target | Verdict | +|---|---|---|---|---| +| CartpoleSwingup | 316.2 | 576.1 (false) | 800 | ❌ wrong vnorm | +| CartpoleSwingupSparse | 288.7 | 296.3 (false) | 425 | ❌ wrong vnorm | +| AcrobotSwingup | 145.4 | 173 (true) | 220 | ❌ worse | +| FingerTurnEasy | 511.1 | 571 (true) | 950 | ❌ worse | +| FingerTurnHard | 368.6 | 484 (true) | 950 | ❌ worse | +| HumanoidStand | 12.72 | 18.36 | 700 | ❌ still failing | +| HumanoidWalk | 7.46 | 7.68 | 500 | ❌ still failing | +| HumanoidRun | 3.19 | 3.19 | 130 | ❌ still failing | + +**CONCLUSION**: Reverted spec didn't help. No new bests. Consistency was negative for CartpoleSwingup/Sparse (high variance). +Need constant LR test (Wave V) and wider policy for Humanoid (Wave IV-H). 
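For reference, the mechanism toggled by `normalize_v_targets` — a hedged sketch assuming it standardizes value targets with running mean/std before the value loss (class name and update rule here are illustrative; SLM-Lab's real implementation may differ):

```python
import numpy as np

class ValueTargetNormalizer:
    """Illustrative sketch: standardize value targets with running
    statistics so the value-loss scale stays stable across envs."""
    def __init__(self, eps=1e-8):
        self.mean, self.var, self.count, self.eps = 0.0, 1.0, 0.0, eps

    def update(self, targets):
        t = np.asarray(targets, dtype=float)
        n, tot = len(t), self.count + len(t)
        delta = t.mean() - self.mean
        # parallel (Chan et al.) combination of running and batch stats
        self.var = (self.count * self.var + n * t.var()
                    + delta ** 2 * self.count * n / tot) / tot
        self.mean += delta * n / tot
        self.count = tot

    def normalize(self, targets):
        t = np.asarray(targets, dtype=float)
        return (t - self.mean) / np.sqrt(self.var + self.eps)

norm = ValueTargetNormalizer()
norm.update([1.0, 2.0, 3.0, 4.0, 5.0])
print(norm.normalize([1.0, 2.0, 3.0, 4.0, 5.0]).mean())  # ~0.0
```

One plausible reading of the split Wave G result: standardization helps when target scales drift over training, but can mute the signal when rare large targets (sparse rewards) carry most of the information.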
+ +**Wave III results (p5-ppo13/p5-ppo15, 5-layer value + no grad clip — completed):** + +Only CartpoleSwingup improved slightly (623.8 vs 576.1). All others regressed. +FishSwim p5-ppo15: strength=411.6 (vs 463 old best). AcrobotSwingup p5-ppo15: strength=95.4 (vs 173). + +**CONCLUSION**: 5-layer value + no grad clip is NOT a general improvement. Reverted to 3-layer + clip_grad_val=1.0. + +**Wave H results (p5-ppo12, ALL completed — NONE improved over old bests):** +Re-running same spec (variance reruns + vnorm) didn't help. Run-to-run variance is high but +old bests represent lucky runs. Hyperparameter tuning has hit diminishing returns. + +**Wave G/G2 results (normalize_v_targets=false ablation, ALL completed):** + +| Env | p11 strength | Old Best (true) | Target | Change | Verdict | +|---|---|---|---|---|---| +| **PendulumSwingup** | **533.5** | 276 | 395 | +93% | **✅ NEW PASS** | +| **FingerSpin** | **652.4** | 561 | 600 | +16% | **✅ NEW PASS** | +| **CartpoleBalanceSparse** | **690.4** | 545 | 700 | +27% | **⚠️ 99% of target** | +| **CartpoleSwingup** | **576.1** | 443/506 | 800 | +30% | ⚠️ improved | +| **CartpoleSwingupSparse** | **296.3** | 271 | 425 | +9% | ⚠️ improved | +| PointMass | 854.4 | 863 | 900 | -1% | ⚠️ same | +| FishSwim | 293.9 | 463 | 650 | -36% | ❌ regression | +| FingerTurnEasy | 441.1 | 571 | 950 | -23% | ❌ regression | +| SwimmerSwimmer6 | 386.2 | 485 | 560 | -20% | ❌ regression | +| FingerTurnHard | 335.7 | 484 | 950 | -31% | ❌ regression | +| AcrobotSwingup | 105.1 | 173 | 220 | -39% | ❌ regression | +| HumanoidStand | 12.87 | 18.36 | 500 | -30% | ❌ still failing | + +**CONCLUSION**: `normalize_v_targets: false` helps 5/12, hurts 6/12, neutral 1/12. +- **false wins**: PendulumSwingup, FingerSpin, CartpoleBalanceSparse, CartpoleSwingup, CartpoleSwingupSparse +- **true wins**: FishSwim, FingerTurnEasy/Hard, SwimmerSwimmer6, AcrobotSwingup, PointMass +- **Decision**: Per-env spec selection. 
New `ppo_playground_vnorm` variant for precision envs. + +**Wave F results (multi-unroll=16 + proven hyperparameters):** + +| Env | p10 strength | p10 final_str | Old best str | Target | Verdict | +|---|---|---|---|---|---| +| CartpoleSwingup | 342 | 443 | 443 | 800 | Same | +| FingerTurnEasy | 529 | 685 | 571 | 950 | Better final, worse strength | +| FingerSpin | 402 | 597 | 561 | 600 | Better final (near target!), worse strength | +| FingerTurnHard | 368 | 559 | 484 | 950 | Better final, worse strength | +| SwimmerSwimmer6 | 251 | 384 | 485 | 560 | Worse | +| CartpoleSwingupSparse | 56 | 158 | 271 | 425 | MUCH worse | +| AcrobotSwingup | 31 | 63 | 173 | 220 | MUCH worse | + +**CONCLUSION**: Multi-unroll adds no benefit over single-unroll for any env by `strength` metric. +The `final_strength` improvements for Finger tasks are offset by `strength` regressions. +Root cause: stale old_net (480 vs 30 steps between copies) makes policy ratio less accurate. +**Spec reverted to single-unroll (num_unrolls=1)**. Multi-unroll code preserved in ppo.py. + +**Wave E results (multi-unroll + Brax hyperparameters — ALL worse):** + +Brax-matched spec (clip_eps=0.3, constant LR, 5-layer value, reward_scale=10, minibatch=30720) +hurt every env except HopperStand (which used wrong spec before). Reverted. 
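The horizon effect blamed in the Wave F conclusion above is visible in the GAE recursion itself — a textbook GAE(λ) sketch (standard form; SLM-Lab's tensor implementation differs in details). A single 480-step unroll accumulates λγ-discounted deltas over the whole window, while 16×30-step unrolls truncate the recursion at 30 steps:

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.995, lam=0.95):
    # Backward recursion: A_t = delta_t + gamma*lam*A_{t+1},
    # where delta_t = r_t + gamma*V(s_{t+1}) - V(s_t)
    vals = np.append(np.asarray(values, dtype=float), last_value)
    adv, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * vals[t + 1] - vals[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# gamma=lam=1 makes the accumulation visible: each advantage sums all future deltas
print(gae([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], 0.0, gamma=1.0, lam=1.0))  # [3. 2. 1.]
```

Longer accumulation means each advantage estimate mixes in more reward noise, which is consistent with precision/sparse tasks regressing under time_horizon=480.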
+ +**Wave C completed results** (all reward_scale=10, divide by 10 for true score): + +| Run | Env | strength/10 | final_strength/10 | total_reward_ma/10 | Target | vs Old | +|---|---|---|---|---|---|---| +| p5-ppo7-cartpoleswingup | CartpoleSwingup | 556.6 | 670.5 | 705.3 | 800 | 443→557 ✅ improved | +| p5-ppo7-fingerturneasy | FingerTurnEasy | 511.1 | 693.2 | 687.0 | 950 | 571→511 ❌ **WORSE** | +| p5-ppo7-fingerturnhard | FingerTurnHard | 321.9 | 416.8 | 425.2 | 950 | 484→322 ❌ **WORSE** | +| p5-ppo7-cartpoleswingupsparse2 | CartpoleSwingupSparse | 144.0 | 360.6 | 337.7 | 425 | 271→144 ❌ **WORSE** | + +**KEY FINDING**: time_horizon=480 helps CartpoleSwingup (+25%) but HURTS FingerTurn (-30 to -50%) and CartpoleSwingupSparse (-47%). Long GAE horizons produce noisy advantage estimates for precision/sparse tasks. The official Brax approach is 16×30-step unrolls (short GAE per unroll), NOT 1×480-step unroll. + +--- + +## Spec Changes Applied (2026-03-13) + +### Fix 1: reward_scale=10.0 (matches official mujoco_playground) +- `playground.py`: `PlaygroundVecEnv` now multiplies rewards by `self._reward_scale` +- `__init__.py`: threads `reward_scale` from env spec to wrapper +- `ppo_playground.yaml`: `reward_scale: 10.0` in shared `_env` anchor + +### Fix 2: Revert minibatch_size 2048→4096 (fixes CartpoleSwingup regression) +- `ppo_playground.yaml`: all DM Control specs (ppo_playground, fingerspin, pendulum) now use minibatch_size=4096 +- 15 minibatches × 16 epochs = 240 grad steps (was 30×16=480) +- Restores p5-ppo5 performance for CartpoleSwingup (803 vs 443) + +### Fix 3: Brax-matched spec (commit 6eb08fe9) — time_horizon=480, clip_eps=0.3, constant LR, 5-layer value net +- Increased time_horizon from 30→480 to match total data per update (983K transitions) +- clip_eps 0.2→0.3, constant LR (min_factor=1.0), 5-layer [256×5] value net +- action std upper bound raised (max=2.0 in policy_util.py) +- **Result**: CartpoleSwingup improved (443→557 strength), but FingerTurn 
and CartpoleSwingupSparse got WORSE +- **Root cause**: 1×480-step unroll computes GAE over 480 steps (noisy), vs official 16×30-step unrolls (short, accurate GAE) + +### Fix 4: ppo_playground_short variant (time_horizon=30 + Brax improvements) +- Keeps: reward_scale=10, clip_eps=0.3, constant LR, 5-layer value net, no grad clipping +- Reverts: time_horizon=30, minibatch_size=4096 (15 minibatches, 240 grad steps) +- **Hypothesis**: Short GAE + other Brax improvements = best of both worlds for precision tasks +- Testing on FingerTurnEasy/Hard first (Wave D p5-ppo8-*) + +### Fix 5: Multi-unroll collection (IMPLEMENTED but NOT USED — code stays, spec reverted) +- Added `num_unrolls` parameter to PPO (ppo.py, actor_critic.py). Code works correctly. +- **Brax-matched spec (Wave E, p5-ppo9)**: clip_eps=0.3, constant LR, 5-layer value, reward_scale=10 + - Result: WORSE on 5/7 tested envs. Only CartpoleSwingup improved (443→506). + - Root cause: minibatch_size=30720 → 7.5x fewer gradient steps per transition → underfitting +- **Reverted spec + multi-unroll (Wave F, p5-ppo10)**: clip_eps=0.2, LR decay, 3-layer value, minibatch=4096 + - Result: Same or WORSE on all envs by `strength` metric. Same fps as single-unroll. + - Training compute per env step is identical, but old_net staleness (480 vs 30 steps) hurts. +- **Conclusion**: Multi-unroll adds complexity without benefit. Reverted spec to single-unroll (num_unrolls=1). + Code preserved in ppo.py (defaults to 1). Spec uses original hyperparameters. 
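The minibatch arithmetic running through Fixes 2 and 5 reduces to one formula; a quick sanity check (parameter names are illustrative, mirroring the spec keys discussed above):

```python
def grad_steps_per_update(num_envs, time_horizon, minibatch_size, epochs):
    # batch of transitions per update, split into minibatches, swept once per epoch
    batch = num_envs * time_horizon
    assert batch % minibatch_size == 0, 'batch must divide evenly into minibatches'
    return (batch // minibatch_size) * epochs

# DM Control defaults from this doc: 2048 envs x 30-step unrolls = 61440 transitions
print(grad_steps_per_update(2048, 30, 4096, 16))   # 240 (Fix 2: 15 minibatches x 16 epochs)
print(grad_steps_per_update(2048, 30, 2048, 16))   # 480 (p5-ppo6: 30 minibatches x 16 epochs)
print(grad_steps_per_update(2048, 30, 30720, 16))  # 32  (Wave E: 7.5x fewer than 240 -> underfitting)
```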
+ +--- + +## Completed Runs Needing Intake + +### Humanoid (ppo_playground_loco, post log_std fix) — intake immediately + +| Run | HF Folder | strength | target | HF status | +|---|---|---|---|---| +| p5-ppo6-humanoidrun | ppo_playground_loco_humanoidrun_2026_03_12_175917 | 2.78 | 130 | ✅ uploaded | +| p5-ppo6-humanoidwalk | ppo_playground_loco_humanoidwalk_2026_03_12_175817 | 6.82 | 500 | ✅ uploaded | +| p5-ppo6-humanoidstand | ppo_playground_loco_humanoidstand_2026_03_12_175810 | 12.45 | 700 | ❌ **UPLOAD FAILED (412)** — re-upload first | + +Re-upload HumanoidStand: +```bash +source .env && huggingface-cli upload SLM-Lab/benchmark-dev \ + hf_data/data/benchmark-dev/data/ppo_playground_loco_humanoidstand_2026_03_12_175810 \ + data/ppo_playground_loco_humanoidstand_2026_03_12_175810 --repo-type dataset +``` + +**Conclusion**: loco spec still fails completely for Humanoid — log_std fix insufficient. See spec fixes below. + +### BENCHMARKS.md correction needed (commit b6ef49d9 used wrong metric) + +intake-a used `total_reward_ma` instead of `strength`. Fix these 4 entries: + +| Env | Run | strength (correct) | total_reward_ma (wrong) | target | +|---|---|---|---|---| +| AcrobotSwingup | p5-ppo6-acrobotswingup2 | **172.8** | 253.24 | 220 | +| CartpoleBalanceSparse | p5-ppo6-cartpolebalancesparse2 | **545.1** | 991.81 | 700 | +| CartpoleSwingup | p5-ppo6-cartpoleswingup2 | **unknown — extract from logs** | 641.51 | 800 | +| CartpoleSwingupSparse | p5-ppo6-cartpoleswingupsparse | **270.9** | 331.23 | 425 | + +Extract correct values: `dstack logs p5-ppo6-NAME --since 6h 2>&1 | grep "trial_metrics" | tail -1` → use `strength:` field. + +Also check FingerSpin: `dstack logs p5-ppo6-fingerspin2 --since 6h | grep trial_metrics | tail -1` — confirm strength value. + +**Metric decision needed**: strength penalizes slow learners (CartpoleBalanceSparse strength=545 but final MA=992). Consider switching ALL entries to `final_strength`. 
But this requires auditing every existing entry — do it as a batch before publishing. + +--- + +## Queue (launch when slots open, all 100M) + +| Priority | Env | Spec | Run name | Rationale | +|---|---|---|---|---| +| 1 | PendulumSwingup | ppo_playground_pendulum | p5-ppo6-pendulumswingup | action_repeat=4 + training_epoch=4 (code fix applied) | +| 2 | FingerSpin | ppo_playground_fingerspin | p5-ppo6-fingerspin3 | canonical gamma=0.95 run; fingerspin2 used gamma=0.995 (override silently ignored) | + +--- + +## Full Env Status + +### ✅ Complete (13/25) +| Env | strength | target | normalize_v_targets | +|---|---|---|---| +| CartpoleBalance | 968.23 | 950 | true | +| AcrobotSwingupSparse | 42.74 | 15 | true | +| BallInCup | 942.44 | 680 | true | +| CheetahRun | 865.83 | 850 | true | +| ReacherEasy | 955.08 | 950 | true | +| ReacherHard | 946.99 | 950 | true | +| WalkerRun | 637.80 | 560 | true | +| WalkerStand | 970.94 | 1000 | true | +| WalkerWalk | 952 | 960 | true | +| HopperHop | 22.00 | ~2 | true | +| HopperStand | 118.2 | ~70 | true | +| PendulumSwingup | 533.5 | 395 | **false** | +| FingerSpin | 652.4 | 600 | **false** | + +### ⚠️ Below target (9/25) +| Env | best strength | target | best with | status | +|---|---|---|---|---| +| CartpoleSwingup | 576.1 | 800 | false | Improved +30% from 443 (true) | +| CartpoleBalanceSparse | 545 | 700 | true | Testing false (p5-ppo11) | +| CartpoleSwingupSparse | 296.3 | 425 | false | Improved +9% from 271 (true) | +| AcrobotSwingup | 173 | 220 | true | false=105, regressed | +| FingerTurnEasy | 571 | 950 | true | false=441, regressed | +| FingerTurnHard | 484 | 950 | true | false=336, regressed | +| FishSwim | 463 | 650 | true | Testing false (p5-ppo11) | +| SwimmerSwimmer6 | 509.3 | 560 | true | final_strength=560.6 (at target!) 
| +| PointMass | 863 | 900 | true | false=854, ~same | + +### ❌ Fundamental failure — Humanoid (3/25) +| Env | best strength | target | diagnosis | +|---|---|---|---| +| HumanoidRun | 3.19 | 130 | <3% target, NormalTanh distribution needed | +| HumanoidWalk | 7.68 | 500 | <2% target, wider policy (2×256) didn't help | +| HumanoidStand | 18.36 | 700 | <3% target, constant LR + wider policy tested, no improvement | + +**Humanoid tested and failed**: wider 2×256 policy + constant LR + vnorm (Wave IV-H). MA stayed flat at 8-10 for HumanoidStand over entire training. Root cause is likely NormalTanh distribution (state-dependent std + tanh squashing) — a fundamental architectural difference from Brax. + +--- + +## Spec Fixes Required + +### Priority 1: Humanoid loco spec (update ppo_playground_loco) + +Official uses `num_envs=8192, time_horizon=20 (unroll_length)` for loco. We use `num_envs=2048, time_horizon=64`. + +**Proposed update to ppo_playground_loco**: +```yaml +ppo_playground_loco: + agent: + algorithm: + gamma: 0.97 + time_horizon: 20 # was 64; official unroll_length=20 + training_epoch: 4 + env: + num_envs: 8192 # was 2048; official loco num_envs=8192 +``` + +**Before launching**: verify VRAM by checking if 8192 envs fits A4500 20GB. Run one Humanoid env, check `dstack logs NAME --since 10m | grep -i "memory\|OOM"` after 5 min. + +**Rerun only**: HumanoidRun, HumanoidWalk, HumanoidStand (3 runs). HopperStand also uses loco spec — add if VRAM confirmed OK. + +### Priority 2: CartpoleSwingup regression + +p5-ppo5 scored 803 ✅; p5-ppo6 scored ~641. The p5-ppo6 change was `minibatch_size: 2048` (30 minibatches) vs p5-ppo5's 4096 (15 minibatches). More gradient steps per iter hurt CartpoleSwingup. + +**Option A**: Revert `ppo_playground` minibatch_size from 2048→4096 (15 minibatches). Rerun only failing DM Control envs (CartpoleSwingup, CartpoleSwingupSparse, + any that need it). 
+ +**Option B**: Accept 641 and note the trade-off — p5-ppo6 improved other envs (CartpoleBalance 968 was already ✅). + +### Priority 3: FingerTurnEasy/Hard + +No official override. At 570/? vs target 950, gap is large. Check: +```bash +grep -A10 "Finger" ~/.cache/uv/archive-v0/ON8dY3irQZTYI3Bok0SlC/mujoco_playground/config/dm_control_suite_params.py +``` + +May need deeper policy network [32,32,32,32] (official arch) vs our [64,64]. + +--- + +## Tuning Principles Learned + +1. **Check official per-env overrides first**: `dm_control_suite_params.py` has `discounting`, `action_repeat`, `num_updates_per_batch` per env. These are canonical. + +2. **action_repeat** is env-level, not spec-level. Implemented in `playground.py` via `_ACTION_REPEAT` dict. PendulumSwingup→4. Add others as found. + +3. **NaN loss**: `log_std` clamp max=0.5 helps but Humanoid (21 DOF) still has many NaN skips. Rate-limited to log every 10K. If NaN dominates → spec is wrong. + +4. **num_envs scales with task complexity**: Cartpole/Acrobot: 2048 fine. Humanoid locomotion: needs 8192 for rollout diversity. + +5. **time_horizon (unroll_length)**: DM Control official=30, loco official=20. Longer → more correlated rollouts → less diversity per update. Match official. + +6. **Minibatch count**: more minibatches = more gradient steps per batch. Can overfit or slow convergence for simpler envs. 15 minibatches (p5-ppo5) vs 30 (p5-ppo6) — the latter hurt CartpoleSwingup. + +7. **Sparse reward + strength metric**: strength (trajectory mean) severely penalizes sparse/delayed convergence. CartpoleBalanceSparse strength=545 but final MA=992. Resolve metric before publishing. + +8. **High seed variance** (consistency < 0): some seeds solve, some don't → wrong spec, not bad luck. Fix exploration (entropy_coef) or use different spec. + +9. **-s overrides are silently ignored** if the YAML key isn't a `${variable}` placeholder. 
Always verify overrides took effect via logs, e.g. `dstack logs RUN_NAME --since 6h 2>&1 | grep -E "gamma|lr|training_epoch"`.
+
+10. **Loco spec failures**: if the loco spec gives <20 on an env with a target >100, the issue is almost certainly a num_envs/time_horizon mismatch vs official, not a fundamental algo failure.
+
+---
+
+## Code Changes This Session
+
+| Commit | Change |
+|---|---|
+| `8fe7bc76` | `playground.py`: `_ACTION_REPEAT` lookup for per-env action_repeat. `ppo_playground.yaml`: added `ppo_playground_fingerspin` and `ppo_playground_pendulum` specs. |
+| `fb55c2f9` | `base.py`: rate-limit NaN loss warning (every 10K skips). `ppo_playground.yaml`: revert log_frequency 1M→100K. |
+| `3f4ede3d` | BENCHMARKS.md: mark HopperHop ✅. |
+
+---
+
+## Resume Commands
+
+```bash
+# Setup
+git pull && uv sync --no-default-groups
+
+# Check jobs
+dstack ps
+
+# Intake a completed run
+dstack logs RUN_NAME --since 6h 2>&1 | grep "trial_metrics" | tail -1
+dstack logs RUN_NAME --since 6h 2>&1 | grep -iE "Uploading|benchmark-dev"
+
+# Pull HF data
+source .env && huggingface-cli download SLM-Lab/benchmark-dev \
+  --local-dir hf_data/data/benchmark-dev --repo-type dataset \
+  --include "data/FOLDER_NAME/*"
+
+# Plot
+uv run slm-lab plot -t "EnvName" -d hf_data/data/benchmark-dev/data -f FOLDER_NAME
+
+# Launch PendulumSwingup (queue priority 1)
+source .env && uv run slm-lab run-remote --gpu \
+  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground_pendulum train \
+  -s env=playground/PendulumSwingup -s max_frame=100000000 -n p5-ppo6-pendulumswingup
+
+# Launch FingerSpin canonical (queue priority 2)
+source .env && uv run slm-lab run-remote --gpu \
+  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground_fingerspin train \
+  -s env=playground/FingerSpin -s max_frame=100000000 -n p5-ppo6-fingerspin3
+
+# Launch Humanoid loco (after updating ppo_playground_loco spec to num_envs=8192, time_horizon=20)
+source .env && uv run slm-lab run-remote --gpu \
+  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground_loco train \
+  -s env=playground/HumanoidRun -s max_frame=100000000 -n p5-ppo6-humanoidrun2
+```
+
+---
+
+## CRITICAL CORRECTION (2026-03-13) — Humanoid is DM Control, not Loco
+
+**Root cause of Humanoid failure**: HumanoidRun/Walk/Stand are registered in `dm_control_suite/__init__.py` — they ARE DM Control envs. We incorrectly ran them with `ppo_playground_loco` (gamma=0.97, 4 epochs, time_horizon=64).
+
+Official config uses DEFAULT DM Control params for them: discounting=0.995, 2048 envs, lr=1e-3, unroll_length=30, 16 epochs.
+
+**NaN was never the root cause** — intake-b confirmed NaN skips were 0, 0, 2 in the loco runs. The spec was simply wrong.
+
+**Fix**: Run all 3 Humanoid envs with `ppo_playground` (DM Control spec). No spec change needed.
+
+```bash
+# Launch with correct spec
+source .env && uv run slm-lab run-remote --gpu \
+  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground train \
+  -s env=playground/HumanoidRun -s max_frame=100000000 -n p5-ppo6-humanoidrun2
+
+source .env && uv run slm-lab run-remote --gpu \
+  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground train \
+  -s env=playground/HumanoidWalk -s max_frame=100000000 -n p5-ppo6-humanoidwalk2
+
+source .env && uv run slm-lab run-remote --gpu \
+  slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml ppo_playground train \
+  -s env=playground/HumanoidStand -s max_frame=100000000 -n p5-ppo6-humanoidstand2
+```
+
+**HopperStand**: Also a DM Control env. If p5-ppo6-hopperstand (loco spec, 16.38) is below target, rerun with `ppo_playground`.
+
+**Do NOT intake** the loco-spec Humanoid runs (2.78/6.82/12.45) — wrong spec, not valid benchmark results. The old ppo_playground runs (2.86/3.73) were also the wrong spec, but at least the right family.
+ +**Updated queue (prepend these as highest priority)**: + +| Priority | Env | Spec | Run name | +|---|---|---|---| +| 0 | HumanoidRun | ppo_playground | p5-ppo6-humanoidrun2 | +| 0 | HumanoidWalk | ppo_playground | p5-ppo6-humanoidwalk2 | +| 0 | HumanoidStand | ppo_playground | p5-ppo6-humanoidstand2 | +| 0 | HopperStand | ppo_playground | p5-ppo6-hopperstand2 (if loco result ⚠️) | + +Note on loco spec (`ppo_playground_loco`): only for actual locomotion robot envs (Go1, G1, BerkeleyHumanoid, etc.) — NOT for DM Control Humanoid. + +--- + +## METRIC CORRECTION (2026-03-13) — strength vs final_strength + +**Problem**: `strength` = trajectory-averaged mean over entire run. For slow-rising envs this severely underrepresents end-of-training performance. After metric correction to `strength`: + +| Env | strength | total_reward_ma | target | conclusion | +|---|---|---|---|---| +| CartpoleSwingup | **443.0** | 641.51 | 800 | Massive regression from p5-ppo5 (803). Strength 443 << 665 (65M result) — curve rises but slow start drags average down | +| CartpoleBalanceSparse | **545.1** | 991.81 | 700 | Hits target by end (final MA=992) but sparse reward delays convergence | +| AcrobotSwingup | **172.8** | 253.24 | 220 | Below target by strength, above by final MA | +| CartpoleSwingupSparse | **270.9** | 331.23 | 425 | Below both metrics | + +**Resolution needed**: Reference scores from mujoco_playground are end-of-training values, not trajectory averages. `final_strength` (= last eval MA) is the correct comparison metric. **Recommend switching BENCHMARKS.md score column to `final_strength`** and audit all existing entries. + +**CartpoleSwingup regression** is real regardless of metric: p5-ppo5 `final_strength` would be ~800+, p5-ppo6 `total_reward_ma`=641. The p5-ppo6 minibatch change (2048→30 minibatches) hurt CartpoleSwingup convergence speed. 
Fix: revert `ppo_playground` minibatch_size to 4096 (15 minibatches) — OR accept and investigate if CartpoleSwingup needs its own spec variant. + +--- + +## Next Architectural Changes + +Research-based prioritized list of changes NOT yet tested. Ordered by expected impact across the most envs. Wave I (5-layer value + no grad clip) is currently running — results pending. + +### Priority 1: NormalTanhDistribution (tanh-squashed actions) + +**Expected impact**: HIGH — affects FingerTurnEasy/Hard, FishSwim, Humanoid, CartpoleSwingup +**Implementation complexity**: MEDIUM (new distribution class + policy_util changes) +**Envs helped**: All continuous-action envs, especially precision/manipulation tasks + +**What Brax does differently**: Brax uses `NormalTanhDistribution` — samples from `Normal(loc, scale)`, then applies `tanh` to bound actions to [-1, 1]. The log-probability includes a log-det-jacobian correction: `log_prob -= log(1 - tanh(x)^2)`. The scale is parameterized as `softplus(raw_scale) + 0.001` (state-dependent, output by the network). + +**What SLM-Lab does**: Raw `Normal(loc, scale)` with state-independent `log_std` as an `nn.Parameter`. Actions can exceed [-1, 1] and are silently clipped by the environment. The log-prob does NOT account for this clipping, creating a mismatch between the distribution the agent thinks it's using and the effective action distribution. + +**Why this matters**: +1. **Gradient quality**: Without jacobian correction, the policy gradient is biased. Actions near the boundary (common in precise manipulation like FingerTurn) have incorrect log-prob gradients. The agent cannot learn fine boundary control. +2. **Exploration**: State-dependent std allows the agent to be precise where it's confident and exploratory where uncertain. State-independent std forces uniform exploration across all states — wasteful for tasks requiring both coarse and fine control. +3. 
**FingerTurn gap (571/950 = 60%)**: FingerTurn requires precise angular positioning of a fingertip. Without tanh squashing, actions at the boundary are clipped but the log-prob doesn't reflect this — the policy "thinks" it's outputting different actions that are actually identical after clipping. This prevents learning fine-grained control near action limits. +4. **Humanoid gap (<3%)**: 21 DOF with high-dimensional action space. State-independent std means all joints explore equally. Humanoid needs to stabilize torso (low variance) while exploring leg movement (high variance) — impossible with shared std. + +**Implementation plan**: +1. Add `NormalTanhDistribution` class in `slm_lab/lib/distribution.py`: + - Forward: `action = tanh(Normal(loc, scale).rsample())` + - log_prob: `Normal.log_prob(atanh(action)) - log(1 - action^2 + eps)` + - entropy: approximate (no closed form for tanh-Normal) +2. Modify `policy_util.init_action_pd()` to handle the new distribution +3. Remove `log_std_init` for playground specs — let the network output both mean and std (state-dependent) +4. Network change: policy output dim doubles (mean + raw_scale per action dim) + +**Risk**: Medium. Tanh squashing changes gradient dynamics significantly. Need to validate on already-solved envs (CartpoleBalance, WalkerRun) to ensure no regression. Can gate behind a spec flag (`action_pdtype: NormalTanh`). + +--- + +### Fix 6: Constant LR variants + Humanoid variant (commit pending) + +Added three new spec variants to `ppo_playground.yaml`: +- `ppo_playground_constlr`: DM Control + constant LR + minibatch_size=4096. For envs where vnorm=false works. +- `ppo_playground_vnorm_constlr`: DM Control + vnorm + constant LR + minibatch_size=2048. For precision envs. +- `ppo_playground_humanoid`: 2×256 policy + constant LR + vnorm. For Humanoid DM Control envs. 
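Returning to Priority 1: the tanh change-of-variables correction at the heart of `NormalTanhDistribution` can be sketched in pure stdlib Python. This is illustrative only — `tanh_normal_log_prob` and `tanh_normal_cdf` are hypothetical helpers, not Brax or SLM-Lab code — with a finite-difference check that the corrected density is self-consistent:

```python
import math

def tanh_normal_log_prob(action, loc, scale, eps=1e-6):
    """log p(y) for y = tanh(x), x ~ Normal(loc, scale).
    Change of variables: log p_Y(y) = log p_X(atanh(y)) - log(1 - y^2)."""
    y = max(min(action, 1 - eps), -1 + eps)  # keep atanh finite at the boundary
    x = math.atanh(y)
    log_px = -0.5 * ((x - loc) / scale) ** 2 - math.log(scale * math.sqrt(2 * math.pi))
    return log_px - math.log(1 - y * y + eps)  # log-det-jacobian correction

def tanh_normal_cdf(y, loc, scale):
    """P(tanh(X) <= y) = Phi((atanh(y) - loc) / scale); used only for the check."""
    return 0.5 * (1 + math.erf((math.atanh(y) - loc) / (scale * math.sqrt(2))))

# The analytic density must match the numerical derivative of the CDF.
loc, scale, y, h = 0.3, 0.5, 0.7, 1e-5
numeric = (tanh_normal_cdf(y + h, loc, scale) - tanh_normal_cdf(y - h, loc, scale)) / (2 * h)
analytic = math.exp(tanh_normal_log_prob(y, loc, scale))
assert abs(numeric - analytic) / numeric < 1e-3
```

Dropping the `- log(1 - y^2)` term gives a biased log-prob for boundary actions, which is exactly the FingerTurn failure mode described above.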
+ +--- + +### Priority 2: Constant LR (remove LinearToMin decay) + +**Expected impact**: MEDIUM — affects all envs, especially long-training ones +**Implementation complexity**: TRIVIAL (spec-only change) +**Envs helped**: CartpoleSwingup, CartpoleSwingupSparse, FingerTurnEasy/Hard, FishSwim + +**What Brax does**: Constant LR = 1e-3 for all DM Control envs. No decay. + +**What SLM-Lab does**: `LinearToMin` decay from 1e-3 to 3.3e-5 (min_factor=0.033) over the full training run. + +**Why this matters**: By the midpoint of training, SLM-Lab's LR is already at ~5e-4 — half the Brax LR. By 75% of training, it's at ~2.7e-4. For envs that converge late (CartpoleSwingup, FishSwim), the LR is too low during the critical learning phase. Brax maintains full learning capacity throughout. + +**This was tested as part of the Brax hyperparameter bundle (Wave E) which was ALL worse**, but that test changed 4 things simultaneously (clip_eps=0.3 + constant LR + 5-layer value + reward_scale=10). The constant LR was never tested in isolation. + +**Implementation**: Set `min_factor: 1.0` in spec (or remove `lr_scheduler_spec` entirely). + +**Risk**: Low. Constant LR is the Brax default and widely used. If instability occurs late in training, a gentler decay (`min_factor: 0.3`) can be used as fallback. + +--- + +### Priority 3: Clip epsilon 0.3 (from 0.2) + +**Expected impact**: MEDIUM — affects all envs +**Implementation complexity**: TRIVIAL (spec-only change) +**Envs helped**: FingerTurnEasy/Hard, FishSwim, CartpoleSwingup (tasks needing faster policy adaptation) + +**What Brax does**: `clipping_epsilon=0.3` for DM Control. + +**What SLM-Lab does**: `clip_eps=0.2`. + +**Why this matters**: Clip epsilon 0.2 constrains the policy ratio to [0.8, 1.2]. At 0.3, it's [0.7, 1.3] — allowing 50% larger policy updates per step. For envs that need to explore widely before converging (FingerTurn, FishSwim), the tighter constraint slows learning. 
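Both the LR-decay figures in Priority 2 and the clip bands above are quick to sanity-check. `linear_to_min` below is an illustrative stand-in for the LinearToMin scheduler, assuming plain linear interpolation of the LR multiplier:

```python
def linear_to_min(progress, min_factor=0.033):
    """LR multiplier decaying linearly from 1.0 at progress=0 to min_factor at progress=1."""
    return 1.0 - progress * (1.0 - min_factor)

base_lr = 1e-3
assert abs(base_lr * linear_to_min(0.50) - 5.165e-4) < 1e-7   # ~5e-4 at midpoint
assert abs(base_lr * linear_to_min(0.75) - 2.7475e-4) < 1e-7  # ~2.7e-4 at 75% of training
assert abs(base_lr * linear_to_min(1.00) - 3.3e-5) < 1e-9     # final floor, 30x below Brax

# clip_eps bounds the policy ratio to [1 - eps, 1 + eps]:
# 0.2 -> [0.8, 1.2] (width 0.4); 0.3 -> [0.7, 1.3] (width 0.6), i.e. 50% wider
assert abs((1.3 - 0.7) / (1.2 - 0.8) - 1.5) < 1e-9
```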
+ +**This was tested in the Brax bundle (Wave E) alongside 3 other changes — all worse together.** Never tested in isolation or with just constant LR. + +**Implementation**: Change `start_val: 0.2` to `start_val: 0.3` in `clip_eps_spec`. + +**Risk**: Low-medium. Larger clip_eps can cause training instability with small batches. However, with our 61K batch (2048 envs * 30 steps), it should be safe. If combined with constant LR (#2), the compounding effect should be tested carefully. + +--- + +### Priority 4: Per-env tuning for FingerTurn (if P1-P3 insufficient) + +**Expected impact**: HIGH for FingerTurn specifically +**Implementation complexity**: LOW (spec variant) +**Envs helped**: FingerTurnEasy, FingerTurnHard only + +If NormalTanh + constant LR + clip_eps=0.3 don't close the FingerTurn gap (currently 60% and 51% of target), try: + +1. **Lower gamma (0.99 → 0.95)**: FingerSpin uses gamma=0.95 officially. FingerTurn may benefit from shorter horizon discounting since reward is instantaneous (current angle vs target). Lower gamma reduces value function complexity. + +2. **Smaller policy network**: Brax DM Control uses `(32, 32, 32, 32)` — our `(64, 64)` may over-parameterize for manipulation tasks. Try `(32, 32, 32, 32)` to match exactly. + +3. **Higher entropy coefficient**: FingerTurn has a narrow solution manifold. Increasing entropy from 0.01 to 0.02 would encourage broader exploration of finger positions. + +--- + +### Priority 5: Humanoid-specific — num_envs=8192 + +**Expected impact**: HIGH for Humanoid specifically +**Implementation complexity**: TRIVIAL (spec-only) +**Envs helped**: HumanoidStand, HumanoidWalk, HumanoidRun + +**Current situation**: Humanoid was incorrectly run with loco spec (gamma=0.97, 4 epochs). The correction to DM Control spec (gamma=0.995, 16 epochs) is being tested in Wave I (p5-ppo13). However, even with correct spec, the standard 2048 envs may be insufficient. + +**Why num_envs matters for Humanoid**: 21 DOF, 67-dim observations. 
With 2048 envs and time_horizon=30, the batch is 61K transitions — each containing a narrow slice of the 21-DOF state space. Humanoid needs more diverse rollouts to learn coordinated multi-joint control. Brax's effective batch of 983K transitions provides 16x more state-space coverage per update. + +**Since we can't easily get 16x more data per update**, increasing num_envs from 2048 to 4096 or 8192 doubles/quadruples rollout diversity. Combined with NormalTanh (state-dependent std for per-joint exploration), this could be sufficient. + +**VRAM concern**: 8192 envs may exceed A4500 20GB. Test with a quick 1M frame run first. Fallback: 4096 envs. + +--- + +### NOT recommended (already tested, no benefit) + +| Change | Wave | Result | Why it failed | +|---|---|---|---| +| normalize_v_targets: false | G/G2 | Mixed (helps 5, hurts 6) | Already per-env split in spec | +| Multi-unroll (num_unrolls=16) | F | Same or worse by strength | Stale old_net (480 vs 30 steps between copies) | +| Brax hyperparameter bundle (clip_eps=0.3 + constant LR + 5-layer value + reward_scale=10) | E | All worse | Confounded — 4 changes at once. 
Individual effects unknown except for reward_scale (helps) |
+| time_horizon=480 (single long unroll) | C | Helps CartpoleSwingup, hurts FingerTurn | 480-step GAE is noisy for precision tasks |
+| 5-layer value + no grad clip | III | Only helped CartpoleSwingup slightly | Hurt AcrobotSwingup, FishSwim; not general |
+| NormalTanh distribution | II | Abandoned | Architecturally incompatible — SLM-Lab stores post-tanh actions, atanh inversion unstable |
+| vnorm=true rerun (reverted spec) | IV | All worse or same | No new information — variance rerun |
+| 4×32 Brax policy + constant LR + vnorm | VI | All worse | FingerTurnEasy 408 (vs 571), FingerTurnHard 244 (vs 484), FishSwim 106 (vs 463) |
+| Humanoid wider 2×256 + constant LR + vnorm | IV-H | No improvement | MA flat at 8-10 for all 3 Humanoid envs; NormalTanh is root cause |
+
+### Wave V-B completed results (constant LR)
+
+| Env | strength | final_strength | Old best | Verdict |
+|---|---|---|---|---|
+| PointMass | 841.3 | 877.3 | 863.5 | ❌ strength lower |
+| **SwimmerSwimmer6** | **517.3** | 585.7 | 509.3 | ✅ NEW BEST (+1.6%) |
+| FishSwim | 434.6 | 550.8 | 463.0 | ❌ strength lower (final much better) |
+
+### Wave VII completed results (clip_eps=0.3 + constant LR)
+
+| Env | strength | final_strength | Old best | Verdict |
+|---|---|---|---|---|
+| FingerTurnEasy | 518.0 | 608.8 | 570.9 | ❌ strength lower (final much better, but slow start drags average) |
+| FingerTurnHard | 401.7 | 489.7 | 484.1 | ❌ strength lower (same pattern) |
+| **FishSwim** | **476.9** | 581.4 | 463.0 | ✅ NEW BEST (+3%) |
+
+**Key insight**: clip_eps=0.3 produces higher final performance but worse trajectory-averaged strength. The wider clip allows bigger policy updates which increases exploration early (slower convergence) but reaches higher asymptotic performance. The strength metric penalizes late bloomers.
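The late-bloomer effect is easy to illustrate with two toy curves (numbers invented for illustration; `strength`/`final_strength` here are simplified stand-ins for the real metrics):

```python
# Two synthetic eval curves with identical end-of-training performance:
fast = [900] * 100                               # converges immediately
late = [i * 10 for i in range(90)] + [900] * 10  # slow rise to the same endpoint

def strength(curve):
    """Trajectory-averaged mean over the entire run."""
    return sum(curve) / len(curve)

def final_strength(curve, window=10):
    """Mean over the last evaluation window (end-of-training MA)."""
    return sum(curve[-window:]) / window

assert final_strength(fast) == final_strength(late) == 900  # identical final MA
assert strength(fast) == 900
assert strength(late) < 0.6 * strength(fast)                # trajectory average heavily penalized
```

This mirrors the FingerTurnEasy row above: final_strength 608.8 beats the old best while strength 518.0 trails it.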
+ +### Wave V completed results + +| Env | strength | final_strength | Old best | Verdict | +|---|---|---|---|---| +| CartpoleSwingup | **606.5** | 702.6 | 576.1 | ✅ NEW BEST (+5%) | +| CartpoleSwingupSparse | **383.7** | 536.2 | 296.3 | ✅ NEW BEST (+29%) | +| CartpoleBalanceSparse | **757.9** | 993.0 | 690.4 | ✅ NEW BEST (+10%) | +| AcrobotSwingup | 161.2 | 246.9 | 172.8 | ❌ strength lower (final_strength much better but trajectory avg worse due to slow start) | + +**Key insight**: Constant LR is the single most impactful change found. LR decay from 1e-3 to 3.3e-5 was hurting late-converging envs. CartpoleBalanceSparse went from 690→993 (final_strength), effectively solved. + +### Completed waves + +**Wave VI** (p5-ppo18): 4×32 Brax policy — **STOPPED, all underperformed**. FingerTurnEasy MA 408, FingerTurnHard MA 244, FishSwim MA 106. All below old bests. + +**Wave IV-H** (p5-ppo16h): Humanoid wider 2×256 + constant LR + vnorm — all flat at MA 8-10. + +### Next steps after Wave VII + +1. **Humanoid num_envs=4096/8192** — only major gap remaining after Wave VII +2. 
**Consider constant LR + clip_eps=0.3 as new general default** if results hold across envs
+
+### Key Brax architecture differences (from source code analysis)
+
+| Parameter | Brax Default | SLM-Lab | Impact |
+|---|---|---|---|
+| Policy | 4×32 (deeper, narrower) | 2×64 | **Testable via spec** |
+| Value | 5×256 | 3×256 | Tested Wave III — no help |
+| Distribution | tanh_normal | Normal | **Cannot test** (architectural incompatibility) |
+| Init | lecun_uniform | orthogonal_ | Would need code change |
+| State-dep std | True (scale output by policy network) | False (nn.Parameter) | Tied to tanh_normal — untestable without it |
+| Activation | swish (SiLU) | SiLU | ✅ Match |
+| clipping_epsilon | 0.3 | 0.2 | **Testable via spec** |
+| num_minibatches | 32 | 15-30 | Close enough |
+| num_unrolls | 16 (implicit) | 1 | Tested Wave F — stale old_net hurts |
diff --git a/docs/phase5_brax_comparison.md b/docs/phase5_brax_comparison.md
new file mode 100644
index 0000000000000000000000000000000000000000..9fbb883a28887e68cc0b91aa72f004440a7ca756
--- /dev/null
+++ b/docs/phase5_brax_comparison.md
@@ -0,0 +1,446 @@
+# Phase 5: Brax PPO vs SLM-Lab PPO — Comprehensive Comparison
+
+Source: `google/brax` (latest `main`) and `google-deepmind/mujoco_playground` (latest `main`).
+All values extracted from actual code, not documentation.
+
+---
+
+## 1. Batch Collection Mechanics
+
+### Brax
+The training loop in `brax/training/agents/ppo/train.py` (line 586–591) collects data via nested `jax.lax.scan`:
+
+```python
+(state, _), data = jax.lax.scan(
+    f, (state, key_generate_unroll), (),
+    length=batch_size * num_minibatches // num_envs,
+)
+```
+
+Each inner call does `generate_unroll(env, state, policy, key, unroll_length)` — a `jax.lax.scan` of `unroll_length` sequential env steps. The outer scan repeats this `batch_size * num_minibatches // num_envs` times **sequentially**, rolling the env state forward continuously.
+
+**DM Control default**: `num_envs=2048`, `batch_size=1024`, `num_minibatches=32`, `unroll_length=30`.
+- Outer scan length = `1024 * 32 / 2048 = 16` sequential unrolls. +- Each unroll = 30 steps. +- Total data per training step = 16 * 2048 * 30 = **983,040 transitions** reshaped to `(32768, 30)`. +- Then `num_updates_per_batch=16` SGD passes, each splitting into 32 minibatches. +- **Effective gradient steps per collect**: 16 * 32 = 512. + +### SLM-Lab +`time_horizon=30`, `num_envs=2048` → collects `30 * 2048 = 61,440` transitions. +`training_epoch=16`, `minibatch_size=4096` → 15 minibatches per epoch → 16 * 15 = 240 gradient steps. + +### Difference +**Brax collects 16x more data per training step** by doing 16 sequential unrolls before updating. SLM-Lab does 1 unroll. This means Brax's advantages are computed over much longer trajectories (480 steps vs 30 steps), providing much better value bootstrap targets. + +Brax also shuffles the entire 983K-transition dataset into minibatches, enabling better gradient estimates. + +**Classification: CRITICAL** + +**Fix**: Increase `time_horizon` or implement multi-unroll collection. The simplest fix: increase `time_horizon` from 30 to 480 (= 30 * 16). This gives the same data-per-update ratio. However, this would require more memory. Alternative: keep `time_horizon=30` but change `training_epoch` to 1 and let the loop collect multiple horizons before training — requires architectural changes. + +**Simplest spec-only fix**: Set `time_horizon=480` (or even 256 as a compromise). This is safe because GAE with `lam=0.95` naturally discounts old data. Risk: memory usage increases 16x for the batch buffer. + +--- + +## 2. Reward Scaling + +### Brax +`reward_scaling` is applied **inside the loss function** (`losses.py` line 212): +```python +rewards = data.reward * reward_scaling +``` +This scales rewards just before GAE computation. It does NOT modify the environment rewards. 
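As a quick aside, the batch arithmetic from §1 above can be reproduced directly from the quoted defaults (variable names are illustrative):

```python
# Brax DM Control defaults quoted in section 1
num_envs, batch_size, num_minibatches, unroll_length = 2048, 1024, 32, 30
num_updates_per_batch = 16

outer_scan_len = batch_size * num_minibatches // num_envs   # 16 sequential unrolls
brax_transitions = outer_scan_len * num_envs * unroll_length
brax_grad_steps = num_updates_per_batch * num_minibatches

# SLM-Lab spec values quoted in section 1
time_horizon, training_epoch, minibatch_size = 30, 16, 4096
slm_transitions = time_horizon * num_envs
slm_grad_steps = training_epoch * (slm_transitions // minibatch_size)

assert brax_transitions == 983_040 and slm_transitions == 61_440
assert brax_transitions // slm_transitions == 16          # the 16x data gap
assert (brax_grad_steps, slm_grad_steps) == (512, 240)
```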
+ +**DM Control default**: `reward_scaling=10.0` +**Locomotion default**: `reward_scaling=1.0` +**Manipulation default**: `reward_scaling=1.0` (except PandaPickCubeCartesian: 0.1) + +### SLM-Lab +`reward_scale` is applied in the **environment wrapper** (`playground.py` line 149): +```python +rewards = np.asarray(self._state.reward) * self._reward_scale +``` + +**Current spec**: `reward_scale: 10.0` (DM Control) + +### Difference +Functionally equivalent — both multiply rewards by a constant before GAE. The location (env vs loss) shouldn't matter for PPO since rewards are only used in GAE computation. + +**Classification: MINOR** — Already matching for DM Control. + +--- + +## 3. Observation Normalization + +### Brax +Uses Welford's online algorithm to track per-feature running mean/std. Applied via `running_statistics.normalize()`: +```python +data = (data - mean) / std +``` +Mean-centered AND divided by std. Updated **every training step** before SGD (line 614). +`normalize_observations=True` for all environments. +`std_eps=0.0` (default, no epsilon in std). + +### SLM-Lab +Uses gymnasium's `VectorNormalizeObservation` (CPU) or `TorchNormalizeObservation` (GPU), which also uses Welford's algorithm with mean-centering and std division. + +**Current spec**: `normalize_obs: true` + +### Difference +Both use mean-centered running normalization. Brax updates normalizer params inside the training loop (not during rollout), while SLM-Lab updates during rollout (gymnasium wrapper). This is a subtle timing difference but functionally equivalent. + +Brax uses `std_eps=0.0` by default, while gymnasium uses `epsilon=1e-8`. Minor numerical difference. + +**Classification: MINOR** — Already matching. + +--- + +## 4. 
Value Function + +### Brax +- **Loss**: Unclipped MSE by default (`losses.py` line 252–263): + ```python + v_error = vs - baseline + v_loss = jnp.mean(v_error * v_error) * 0.5 * vf_coefficient + ``` +- **vf_coefficient**: 0.5 (default in `train.py`) +- **Value clipping**: Only if `clipping_epsilon_value` is set (default `None` = no clipping) +- **No value target normalization** — raw GAE targets +- **Separate policy and value networks** (always separate in Brax's architecture) +- Value network: 5 hidden layers of 256 (DM Control default) with `swish` activation +- **Bootstrap on timeout**: Optional, default `False` + +### SLM-Lab +- **Loss**: MSE with `val_loss_coef=0.5` +- **Value clipping**: Optional via `clip_vloss` (default False) +- **Value target normalization**: Optional via `normalize_v_targets: true` using `ReturnNormalizer` +- **Architecture**: `[256, 256, 256]` with SiLU (3 layers vs Brax's 5) + +### Difference +1. **Value network depth**: Brax uses **5 layers of 256** for DM Control, SLM-Lab uses **3 layers of 256**. This is a meaningful capacity difference for the value function, which needs to accurately estimate returns. + +2. **Value target normalization**: SLM-Lab has `normalize_v_targets: true` with a `ReturnNormalizer`. Brax does NOT normalize value targets. This could cause issues if the normalizer is poorly calibrated. + +3. **Value network architecture (Loco)**: Brax uses `[256, 256, 256, 256, 256]` for loco too. + +**Classification: IMPORTANT** + +**Fix**: +- Consider increasing value network to 5 layers: `[256, 256, 256, 256, 256]` to match Brax. +- Consider disabling `normalize_v_targets` since Brax doesn't use it and `reward_scaling=10.0` already provides good gradient magnitudes. +- Risk of regressing: the return normalizer may be helping envs with high reward variance. Test with and without. + +--- + +## 5. 
Advantage Computation (GAE)
+
+### Brax
+`compute_gae` in `losses.py` (line 38–100):
+- Standard GAE with `lambda_=0.95`, `discount=0.995` (DM Control)
+- Computed over each unroll of `unroll_length` timesteps
+- Uses `truncation` mask to handle episode boundaries within an unroll
+- `normalize_advantage=True` (default): `advs = (advs - mean) / (std + 1e-8)` over the **entire batch**
+- GAE is evaluated **inside the loss function** on each SGD pass, but always from the stored rollout rewards and baseline values, so the advantages and targets are identical across passes
+
+### SLM-Lab
+- GAE computed in `calc_gae_advs_v_targets` using `math_util.calc_gaes`
+- Computed once before training epochs
+- Advantage normalization: per-minibatch standardization in `calc_policy_loss`:
+  ```python
+  advs = math_util.standardize(advs)  # per minibatch
+  ```
+
+### Difference
+1. **GAE horizon**: Brax computes GAE over 30-step unrolls. SLM-Lab also uses 30-step horizon. **Match**.
+2. **Advantage normalization scope**: Brax normalizes over the **entire batch** (983K transitions). SLM-Lab normalizes **per minibatch** (4096 transitions). Per-minibatch normalization has more variance. However, both approaches are standard — SB3 also normalizes per-minibatch.
+3. **Truncation handling**: Brax explicitly handles truncation with `truncation_mask` in GAE. SLM-Lab uses `terminateds` from the env wrapper, with truncation handled by gymnasium's auto-reset. These should be functionally equivalent.
+
+**Classification: MINOR** — Approaches are different but both standard.
+
+---
+
+## 6. Learning Rate Schedule
+
+### Brax
+Default: `learning_rate_schedule=None` → **no schedule** (constant LR).
+Optional: `ADAPTIVE_KL` schedule that adjusts LR based on KL divergence.
+Base LR: `1e-3` (DM Control), `3e-4` (Locomotion).
+ +### SLM-Lab +Uses `LinearToMin` scheduler: +```yaml +lr_scheduler_spec: + name: LinearToMin + frame: "${max_frame}" + min_factor: 0.033 +``` +This linearly decays LR from `1e-3` to `1e-3 * 0.033 = 3.3e-5` over training. + +### Difference +**Brax uses constant LR. SLM-Lab decays LR by 30x over training.** This is a significant difference. Linear LR decay can help convergence in the final phase but can also hurt by reducing the LR too early for long training runs. + +**Classification: IMPORTANT** + +**Fix**: Consider removing or weakening the LR decay for playground envs: +- Option A: Set `min_factor: 1.0` (effectively constant LR) to match Brax +- Option B: Use a much gentler decay, e.g. `min_factor: 0.1` (10x instead of 30x) +- Risk: Some envs may benefit from the decay. Test both. + +--- + +## 7. Entropy Coefficient + +### Brax +**Fixed** (no decay): +- DM Control: `entropy_cost=1e-2` +- Locomotion: `entropy_cost=1e-2` (some overrides to `5e-3`) +- Manipulation: varies, typically `1e-2` or `2e-2` + +### SLM-Lab +**Fixed** (no_decay): +```yaml +entropy_coef_spec: + name: no_decay + start_val: 0.01 +``` + +### Difference +**Match**: Both use fixed `0.01`. + +**Classification: MINOR** — Already matching. + +--- + +## 8. Gradient Clipping + +### Brax +`max_grad_norm` via `optax.clip_by_global_norm()`: +- DM Control default: **None** (no clipping!) +- Locomotion default: `1.0` +- Vision PPO and some manipulation: `1.0` + +### SLM-Lab +`clip_grad_val: 1.0` — always clips gradients by global norm. + +### Difference +**Brax does NOT clip gradients for DM Control by default.** SLM-Lab always clips at 1.0. + +Gradient clipping can be overly conservative, preventing the optimizer from taking large useful steps when gradients are naturally large (e.g., early training with `reward_scaling=10.0`). + +**Classification: IMPORTANT** — Could explain slow convergence on DM Control envs. 
+ +**Fix**: Remove gradient clipping for DM Control playground spec: +```yaml +clip_grad_val: null # match Brax DM Control default +``` +Keep `clip_grad_val: 1.0` for locomotion spec. Risk: gradient explosions without clipping, but Brax demonstrates it works for DM Control. + +--- + +## 9. Action Distribution + +### Brax +Default: `NormalTanhDistribution` — samples from `Normal(loc, scale)` then applies `tanh` postprocessing. +- `param_size = 2 * action_size` (network outputs both mean and log_scale) +- Scale: `scale = (softplus(raw_scale) + 0.001) * 1.0` (min_std=0.001, var_scale=1) +- **State-dependent std**: The scale is output by the policy network (not a separate parameter) +- Uses `tanh` bijector with log-det-jacobian correction + +### SLM-Lab +Default: `Normal(loc, scale)` without tanh. +- `log_std_init` creates a **state-independent** `nn.Parameter` for log_std +- Scale: `scale = clamp(log_std, -5, 0.5).exp()` → std range [0.0067, 1.648] +- **State-independent std** (when `log_std_init` is set) + +### Difference +1. **Tanh squashing**: Brax applies `tanh` to bound actions to [-1, 1]. SLM-Lab does NOT. This is a fundamental architectural difference: + - With tanh: actions are bounded, log-prob includes jacobian correction + - Without tanh: actions can exceed env bounds, relying on env clipping + +2. **State-dependent vs independent std**: Brax uses state-dependent std (network outputs it), SLM-Lab uses state-independent learnable parameter. + +3. **Std parameterization**: Brax uses `softplus + 0.001` (min_std=0.001), SLM-Lab uses `clamp(log_std, -5, 0.5).exp()` with max std of 1.648. + +4. **Max std cap**: SLM-Lab caps at exp(0.5)=1.648. Brax has no explicit cap (softplus can grow unbounded). However, Brax's `tanh` squashing means even large std doesn't produce out-of-range actions. 
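The two std parameterizations in items 3–4 can be compared numerically. A stdlib sketch (helper names are illustrative, not actual Brax/SLM-Lab functions):

```python
import math

def brax_scale(raw_scale, min_std=0.001):
    """Brax-style: softplus(raw_scale) + min_std. Floored at 0.001, no upper cap."""
    return math.log1p(math.exp(raw_scale)) + min_std

def slm_scale(log_std):
    """SLM-Lab-style: clamp(log_std, -5, 0.5).exp(). Std confined to [0.0067, 1.648]."""
    return math.exp(max(-5.0, min(0.5, log_std)))

assert brax_scale(-30.0) >= 0.001                         # min_std floor
assert brax_scale(6.0) > 6.0                              # roughly linear growth, unbounded above
assert abs(slm_scale(-100.0) - math.exp(-5.0)) < 1e-12    # ~0.0067 lower clamp
assert abs(slm_scale(100.0) - math.exp(0.5)) < 1e-12      # ~1.6487 upper clamp
```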
+ +**Classification: IMPORTANT** + +**Note**: For MuJoCo Playground where actions are already in [-1, 1] and the env wrapper has `PlaygroundVecEnv` with action space `Box(-1, 1)`, the `tanh` squashing may not be critical since the env naturally clips. But the log-prob correction matters for policy gradient quality. + +**Fix**: +- The state-independent log_std is a reasonable simplification (CleanRL also uses it). Keep. +- The `max=0.5` clamp may be too restrictive. Consider increasing to `max=2.0` (CleanRL default) or removing the upper clamp entirely. +- Consider implementing tanh squashing as an option for playground envs. + +--- + +## 10. Network Initialization + +### Brax +Default: `lecun_uniform` for all layers (policy and value). +Activation: `swish` (= SiLU). +No special output layer initialization by default. + +### SLM-Lab +Default: `orthogonal_` initialization. +Activation: SiLU (same as swish). + +### Difference +- Brax uses `lecun_uniform`, SLM-Lab uses `orthogonal_`. Both are reasonable for swish/SiLU activations. +- `orthogonal_` tends to preserve gradient magnitudes across layers, which can be beneficial for deeper networks. + +**Classification: MINOR** — Both are standard choices. `orthogonal_` may actually be slightly better for the 3-layer SLM-Lab network. + +--- + +## 11. Network Architecture + +### Brax (DM Control defaults) +- **Policy**: `(32, 32, 32, 32)` — 4 layers of 32, swish activation +- **Value**: `(256, 256, 256, 256, 256)` — 5 layers of 256, swish activation + +### Brax (Locomotion defaults) +- **Policy**: `(128, 128, 128, 128)` — 4 layers of 128 +- **Value**: `(256, 256, 256, 256, 256)` — 5 layers of 256 + +### SLM-Lab (ppo_playground) +- **Policy**: `(64, 64)` — 2 layers of 64, SiLU +- **Value**: `(256, 256, 256)` — 3 layers of 256, SiLU + +### Difference +1. **Policy width**: SLM-Lab uses wider layers (64) but fewer (2 vs 4). Total params: ~similar for DM Control (4*32*32=4096 vs 2*64*64=8192). 
SLM-Lab's policy is actually larger per layer but shallower. + +2. **Value depth**: 3 vs 5 layers. This is significant — the value function benefits from more depth to accurately represent complex return landscapes, especially for long-horizon tasks. + +3. **DM Control policy**: Brax uses very small 32-wide networks. SLM-Lab's 64-wide may be slightly over-parameterized but shouldn't hurt. + +**Classification: IMPORTANT** (mainly the value network depth) + +**Fix**: Consider increasing value network to 5 layers to match Brax: +```yaml +_value_body: &value_body + modules: + body: + Sequential: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: +``` + +--- + +## 12. Clipping Epsilon + +### Brax +Default: `clipping_epsilon=0.3` (in `train.py` line 206). +DM Control: not overridden → **0.3**. +Locomotion: some envs override to `0.2`. + +### SLM-Lab +Default: `clip_eps=0.2` (in spec). + +### Difference +Brax uses **0.3** while SLM-Lab uses **0.2**. This is notable — 0.3 allows larger policy updates per step, which can accelerate learning but risks instability. Given that Brax collects 16x more data per update (see #1), the larger clip epsilon is safe because the policy ratio variance is lower with more data. + +**Classification: IMPORTANT** — Especially in combination with the batch size difference (#1). + +**Fix**: Consider increasing to 0.3 for DM Control playground spec. However, this should only be done together with the batch size fix (#1), since larger clip epsilon with small batches risks instability. + +--- + +## 13. 
Discount Factor + +### Brax (DM Control) +Default: `discounting=0.995` +Overrides: BallInCup=0.95, FingerSpin=0.95 + +### Brax (Locomotion) +Default: `discounting=0.97` +Overrides: Go1Backflip=0.95 + +### SLM-Lab +DM Control: `gamma=0.995` +Locomotion: `gamma=0.97` +Overrides: FingerSpin=0.95 + +### Difference +**Match** for the main categories. + +**Classification: MINOR** — Already matching. + +--- + +## Summary: Priority-Ordered Fixes + +### CRITICAL + +| # | Issue | Brax Value | SLM-Lab Value | Fix | +|---|-------|-----------|--------------|-----| +| 1 | **Batch size (data per training step)** | 983K transitions (16 unrolls of 30) | 61K transitions (1 unroll of 30) | Increase `time_horizon` to 480, or implement multi-unroll collection | + +### IMPORTANT + +| # | Issue | Brax Value | SLM-Lab Value | Fix | +|---|-------|-----------|--------------|-----| +| 6 | **LR schedule** | Constant | Linear decay to 0.033x | Set `min_factor: 1.0` or weaken to 0.1 | +| 8 | **Gradient clipping (DM Control)** | None | 1.0 | Set `clip_grad_val: null` for DM Control | +| 9 | **Action std upper bound** | Softplus (unbounded) | exp(0.5)=1.65 | Increase max clamp from 0.5 to 2.0 | +| 11 | **Value network depth** | 5 layers of 256 | 3 layers of 256 | Add 2 more hidden layers | +| 12 | **Clipping epsilon** | 0.3 | 0.2 | Increase to 0.3 (only with larger batch) | + +### MINOR (already matching or small effect) + +| # | Issue | Status | +|---|-------|--------| +| 2 | Reward scaling | Match (10.0 for DM Control) | +| 3 | Obs normalization | Match (Welford running stats) | +| 5 | GAE computation | Match (lam=0.95, per-minibatch normalization) | +| 7 | Entropy coefficient | Match (0.01, fixed) | +| 10 | Network init | Minor difference (orthogonal vs lecun_uniform) | +| 13 | Discount factor | Match | + +--- + +## Recommended Implementation Order + +### Phase 1: Low-risk spec changes (test on CartpoleBalance/Swingup first) +1. Remove gradient clipping for DM Control: `clip_grad_val: null` +2.
Weaken LR decay: `min_factor: 0.1` (or `1.0` for constant) +3. Increase log_std clamp from 0.5 to 2.0 + +### Phase 2: Architecture changes (test on several envs) +4. Increase value network to 5 layers of 256 +5. Consider disabling `normalize_v_targets` since Brax doesn't use it + +### Phase 3: Batch size alignment (largest expected impact, highest risk) +6. Increase `time_horizon` to 240 or 480 to match Brax's effective batch size +7. If time_horizon increase works, consider increasing `clipping_epsilon` to 0.3 + +### Risk Assessment +- **Safest changes**: #1 (no grad clip), #2 (weaker LR decay), #3 (wider std range) +- **Medium risk**: #4 (deeper value net — more compute, could slow training), #5 (removing normalization) +- **Highest risk/reward**: #6 (larger time_horizon — 16x more memory, biggest expected improvement) + +### Envs Already Solved +Changes should be tested against already-solved envs (CartpoleBalance, CartpoleSwingup, etc.) to ensure no regression. The safest approach is to implement spec variants rather than modifying the default spec. + +--- + +## Key Insight + +The single largest difference is **data collection volume per training step**. Brax collects 16x more transitions before each update cycle. This provides: +1. Better advantage estimates (longer trajectory context) +2. More diverse minibatches (less overfitting per update) +3. Safety for larger clip epsilon and no gradient clipping + +Without matching this, the other improvements will have diminished returns. The multi-unroll collection in Brax is fundamentally tied to its JAX/vectorized architecture — SLM-Lab's sequential PyTorch loop can approximate this by simply increasing `time_horizon`, at the cost of memory. + +A practical compromise: increase `time_horizon` from 30 to 128 or 256 (4-8x, not full 16x) and adjust other hyperparameters accordingly. 
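The data-volume arithmetic above is easy to sanity-check. A minimal sketch in plain Python, using only the numbers quoted in this document (`transitions_per_update` is an illustrative helper, not an SLM-Lab function):

```python
# Back-of-the-envelope check of the batch-size gap described above.

def transitions_per_update(num_envs, unroll_length, num_unrolls=1):
    """Total transitions collected before each PPO update cycle."""
    return num_envs * unroll_length * num_unrolls

brax = transitions_per_update(num_envs=2048, unroll_length=30, num_unrolls=16)
slm_lab = transitions_per_update(num_envs=2048, unroll_length=30)

assert brax == 983_040      # Brax: 16 sequential unrolls of 30 steps
assert slm_lab == 61_440    # SLM-Lab: 1 unroll of 30 steps
assert brax // slm_lab == 16

# The practical compromise: raise time_horizon partway toward 480.
for horizon in (128, 256, 480):
    ratio = transitions_per_update(2048, horizon) / slm_lab
    print(f"time_horizon={horizon}: {ratio:.1f}x current data per update")
```

Raising `time_horizon` all the way to 480 reproduces Brax's total data volume, but at the 16x memory cost noted above.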
diff --git a/docs/phase5_spec_research.md b/docs/phase5_spec_research.md new file mode 100644 index 0000000000000000000000000000000000000000..ba860b497063718519bda04a9ee06f945c936174 --- /dev/null +++ b/docs/phase5_spec_research.md @@ -0,0 +1,273 @@ +# Phase 5 Spec Research: Official vs SLM-Lab Config Comparison + +## Source Files + +- **Official config**: `mujoco_playground/config/dm_control_suite_params.py` ([GitHub](https://github.com/google-deepmind/mujoco_playground/blob/main/mujoco_playground/config/dm_control_suite_params.py)) +- **Official network**: Brax PPO defaults (`brax/training/agents/ppo/networks.py`) +- **Our spec**: `slm_lab/spec/benchmark_arc/ppo/ppo_playground.yaml` +- **Our wrapper**: `slm_lab/env/playground.py` + +## Critical Architectural Difference: Batch Collection Size + +The most significant difference is how much data is collected per update cycle. + +### Official Brax PPO batch mechanics + +In Brax PPO, `batch_size` means **minibatch size in trajectories** (not total batch): + +| Parameter | Official Value | +|---|---| +| `num_envs` | 2048 | +| `unroll_length` | 30 | +| `batch_size` | 1024 (trajectories per minibatch) | +| `num_minibatches` | 32 | +| `num_updates_per_batch` | 16 (epochs) | + +- Sequential unrolls per env = `batch_size * num_minibatches / num_envs` = 1024 * 32 / 2048 = **16** +- Total transitions collected = 2048 envs * 16 unrolls * 30 steps = **983,040** +- Each minibatch = 30,720 transitions +- Grad steps per update = 32 * 16 = **512** + +### SLM-Lab batch mechanics + +| Parameter | Our Value | +|---|---| +| `num_envs` | 2048 | +| `time_horizon` | 30 | +| `minibatch_size` | 2048 | +| `training_epoch` | 16 | + +- Total transitions collected = 2048 * 30 = **61,440** +- Num minibatches = 61,440 / 2048 = **30** +- Each minibatch = 2,048 transitions +- Grad steps per update = 30 * 16 = **480** + +### Comparison + +| Metric | Official | SLM-Lab | Ratio | +|---|---|---|---| +| Transitions per update | 983,040 | 61,440 | **16x 
more in official** | +| Minibatch size (transitions) | 30,720 | 2,048 | **15x more in official** | +| Grad steps per update | 512 | 480 | ~same | +| Data reuse (epochs over same data) | 16 | 16 | same | + +**Impact**: Official collects 16x more data before each gradient update cycle, and each minibatch is 15x larger. The grad-step counts are similar, but each gradient step in official sees 15x more transitions, giving better gradient estimates with lower variance. + +This is likely the **root cause** of most failures, especially on hard exploration tasks (FingerTurn, CartpoleSwingupSparse). + +## Additional Missing Feature: reward_scaling=10.0 + +The official config uses `reward_scaling=10.0`. SLM-Lab has **no reward scaling** (implicitly 1.0). This amplifies the reward signal by 10x, which: +- Helps with sparse/small rewards (CartpoleSwingupSparse, AcrobotSwingup) +- Works in conjunction with value target normalization +- May partially compensate for the batch size difference + +## Network Architecture + +| Component | Official (Brax) | SLM-Lab | Match? | +|---|---|---|---| +| Policy layers | (32, 32, 32, 32) | (64, 64) | Different shape, similar param count | +| Value layers | (256, 256, 256, 256, 256) | (256, 256, 256) | Official deeper | +| Activation | Swish (SiLU) | SiLU | Same | +| Init | default (lecun_uniform) | orthogonal_ | Different | + +The policy architectures have similar total parameter counts (4 layers of 32 vs 2 layers of 64 give weight counts of the same order of magnitude). The value network is 2 layers shallower in SLM-Lab. This is unlikely to be the primary cause of the failures but could matter for harder tasks. + +## Per-Environment Analysis + +### Env: FingerTurnEasy (570 vs 950 target) + +| Parameter | Official | Ours | Mismatch? 
| +|---|---|---|---| +| gamma (discounting) | 0.995 | 0.995 | Match | +| training_epoch (num_updates_per_batch) | 16 | 16 | Match | +| time_horizon (unroll_length) | 30 | 30 | Match | +| action_repeat | 1 | 1 | Match | +| num_envs | 2048 | 2048 | Match | +| reward_scaling | 10.0 | 1.0 (none) | **MISMATCH** | +| batch collection size | 983K | 61K | **MISMATCH (16x)** | +| minibatch transitions | 30,720 | 2,048 | **MISMATCH (15x)** | + +**Per-env overrides**: None in official. Uses all defaults. +**Diagnosis**: Huge gap (570 vs 950). FingerTurn is a precision manipulation task requiring coordinated finger-tip control. The 16x smaller batch likely causes high gradient variance, preventing the policy from learning fine-grained coordination. reward_scaling=10 would also help. + +### Env: FingerTurnHard (~500 vs 950 target) + +Same as FingerTurnEasy — no per-env overrides. Same mismatches apply. +**Diagnosis**: Even harder version, same root cause. Needs larger batches and reward scaling. + +### Env: CartpoleSwingup (443 vs 800 target, regression from p5-ppo5=803) + +| Parameter | Official | p5-ppo5 | p5-ppo6 (current) | +|---|---|---|---| +| minibatch_size | N/A (30,720 transitions) | 4096 | 2048 | +| num_minibatches | 32 | 15 | 30 | +| grad steps/update | 512 | 240 | 480 | +| total transitions/update | 983K | 61K | 61K | +| reward_scaling | 10.0 | 1.0 | 1.0 | + +**Per-env overrides**: None in official. +**Diagnosis**: The p5-ppo5→p5-ppo6 regression (803→443) came from doubling grad steps (240→480) while halving minibatch size (4096→2048). More gradient steps on smaller minibatches = overfitting per update. p5-ppo5's 15 larger minibatches were better for CartpoleSwingup. + +**Answer to key question**: Yes, reverting to minibatch_size=4096 would likely restore CartpoleSwingup performance. However, the deeper fix is the batch collection size — both p5-ppo5 and p5-ppo6 collect only 61K transitions vs official's 983K. 
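The minibatch bookkeeping behind this regression can be checked directly. A small sketch (the helper name is illustrative, not SLM-Lab code):

```python
# Reproduce the p5-ppo5 vs p5-ppo6 grad-step arithmetic from the table above.

def update_stats(num_envs, time_horizon, minibatch_size, epochs):
    total = num_envs * time_horizon            # transitions collected per update
    num_minibatches = total // minibatch_size  # minibatches per epoch
    grad_steps = num_minibatches * epochs      # gradient steps per update
    return total, num_minibatches, grad_steps

# p5-ppo5: larger minibatches, fewer grad steps
assert update_stats(2048, 30, 4096, 16) == (61_440, 15, 240)
# p5-ppo6: halved minibatch size doubles the grad steps on the same 61K data
assert update_stats(2048, 30, 2048, 16) == (61_440, 30, 480)
```

The same 61,440 transitions are reused either way; only the split into minibatches changes, which is why reverting `minibatch_size` to 4096 is a spec-only fix.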
+ +### Env: CartpoleSwingupSparse (270 vs 425 target) + +| Parameter | Official | Ours | Mismatch? | +|---|---|---|---| +| All params | Same defaults | Same as ppo_playground | Same mismatches | +| reward_scaling | 10.0 | 1.0 | **MISMATCH — critical for sparse** | + +**Per-env overrides**: None in official. +**Diagnosis**: Sparse reward + no reward scaling = very weak learning signal. reward_scaling=10 is especially important here. The small batch also hurts exploration diversity. + +### Env: CartpoleBalanceSparse (545 vs 700 target) + +Same mismatches as other Cartpole variants. No per-env overrides. +**Diagnosis**: Note that the actual final MA is 992 (well above target). The low "strength" score (545) reflects slow initial convergence, not inability to solve. If metric switches to final_strength, this may already pass. reward_scaling would accelerate early convergence. + +### Env: AcrobotSwingup (172 vs 220 target) + +| Parameter | Official | Ours | Mismatch? | +|---|---|---|---| +| num_timesteps | 100M | 100M | Match (official has explicit override) | +| All training params | Defaults | ppo_playground | Same mismatches | +| reward_scaling | 10.0 | 1.0 | **MISMATCH** | + +**Per-env overrides**: Official only sets `num_timesteps=100M` (already matched). +**Diagnosis**: Close to target (172 vs 220). reward_scaling=10 would likely close the gap. The final MA (253) exceeds target — metric issue compounds this. + +### Env: SwimmerSwimmer6 (485 vs 560 target) + +| Parameter | Official | Ours | Mismatch? | +|---|---|---|---| +| num_timesteps | 100M | 100M | Match (official has explicit override) | +| All training params | Defaults | ppo_playground | Same mismatches | +| reward_scaling | 10.0 | 1.0 | **MISMATCH** | + +**Per-env overrides**: Official only sets `num_timesteps=100M` (already matched). +**Diagnosis**: Swimmer is a multi-joint locomotion task that benefits from larger batches (more diverse body configurations per update). reward_scaling would also help. 
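Several of the diagnoses above hinge on the missing `reward_scaling=10.0`. A minimal sketch of the wrapper approach, assuming a gymnasium-style vectorized `step()` signature (the class name and interface are hypothetical, not SLM-Lab's actual `PlaygroundVecEnv` API):

```python
import numpy as np

class RewardScaleWrapper:
    """Multiply every reward by a constant before it reaches the algorithm,
    mimicking mujoco_playground's reward_scaling=10.0 default."""

    def __init__(self, venv, reward_scale=10.0):
        self.venv = venv
        self.reward_scale = reward_scale

    def step(self, actions):
        obs, rewards, terminated, truncated, info = self.venv.step(actions)
        # scaled rewards flow into GAE / value-target computation unchanged elsewhere
        return obs, np.asarray(rewards) * self.reward_scale, terminated, truncated, info

    def __getattr__(self, name):
        # delegate reset(), spaces, etc. to the wrapped vec env
        return getattr(self.venv, name)
```

One caveat: logged episode returns would be scaled too, so benchmark metrics would need to divide the scale back out or be tracked from unscaled rewards.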
+ +### Env: PointMass (863 vs 900 target) + +No per-env overrides. Same mismatches. +**Diagnosis**: Very close (863 vs 900). This might pass with reward_scaling alone. Simple task — batch size less critical. + +### Env: FishSwim (~530 vs 650 target, may still be running) + +No per-env overrides. Same mismatches. +**Diagnosis**: 3D swimming task. Would benefit from both larger batches and reward_scaling. + +## Summary of Mismatches (All Envs) + +| Mismatch | Official | SLM-Lab | Impact | Fixable? | +|---|---|---|---|---| +| **Batch collection size** | 983K transitions | 61K transitions | HIGH — 16x less data per update | Requires architectural change to collect multiple unrolls | +| **Minibatch size** | 30,720 transitions | 2,048 transitions | HIGH — much noisier gradients | Limited by venv_pack constraint | +| **reward_scaling** | 10.0 | 1.0 (none) | MEDIUM-HIGH — especially for sparse envs | Easy to add | +| **Value network depth** | 5 layers | 3 layers | LOW-MEDIUM | Easy to change in spec | +| **Weight init** | lecun_uniform | orthogonal_ | LOW | Unlikely to matter much | + +## Proposed Fixes + +### Fix 1: Add reward_scaling (EASY, HIGH IMPACT) + +Add a `reward_scale` parameter to the spec and apply it in the training loop or environment wrapper. + +```yaml +# In ppo_playground spec +env: + reward_scale: 10.0 # Official mujoco_playground default +``` + +This requires a code change to support `reward_scale` in the env or algorithm. Simplest approach: multiply rewards by scale factor in the PlaygroundVecEnv wrapper. + +**Priority: 1 (do this first)** — Easy to implement, likely closes the gap for PointMass, AcrobotSwingup, and CartpoleBalanceSparse. Partial improvement for others. + +### Fix 2: Revert minibatch_size to 4096 for base ppo_playground (EASY) + +```yaml +ppo_playground: + agent: + algorithm: + minibatch_size: 4096 # 15 minibatches, fewer but larger grad steps +``` + +**Priority: 2** — Immediately restores CartpoleSwingup from 443 to ~803. 
May modestly improve other envs. The trade-off: fewer grad steps (240 vs 480) but larger minibatches = more stable gradients. + +### Fix 3: Multi-unroll collection (MEDIUM DIFFICULTY, HIGHEST IMPACT) + +The fundamental gap is that SLM-Lab collects only 1 unroll (30 steps) from each env before updating, while Brax collects 16 sequential unrolls (480 steps). To match official: + +Option A: Increase `time_horizon` to 480 (= 30 * 16). This collects the same total data but changes GAE computation (advantages computed over 480 steps instead of 30). Not equivalent to official. + +Option B: Add a `num_unrolls` parameter that collects multiple independent unrolls of `time_horizon` length before updating. This matches official behavior but requires a code change to the training loop. + +Option C: Accept the batch size difference and compensate with reward_scaling + larger minibatch_size. Less optimal but no code changes needed beyond reward_scaling. + +**Priority: 3** — Biggest potential impact but requires code changes. Try fixes 1-2 first and re-evaluate. + +### Fix 4: Deepen value network (EASY) + +```yaml +_value_body: &value_body + modules: + body: + Sequential: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: + - LazyLinear: {out_features: 256} + - SiLU: +``` + +**Priority: 4** — Minor impact expected. Try after fixes 1-2. + +### Fix 5: Per-env spec variants for FingerTurn (if fixes 1-2 insufficient) + +If FingerTurn still fails after reward_scaling + minibatch revert, create a dedicated variant with tuned hyperparameters (possibly lower gamma, different lr). But try the general fixes first since official uses default params for FingerTurn. + +**Priority: 5** — Only if fixes 1-3 don't close the gap. + +## Recommended Action Plan + +1. **Implement reward_scale=10.0** in PlaygroundVecEnv (multiply rewards by scale factor). 
Add `reward_scale` to env spec. One-line code change + spec update. + +2. **Revert minibatch_size to 4096** in ppo_playground base spec. This gives 15 minibatches * 16 epochs = 240 grad steps (vs 480 now). + +3. **Rerun the 5 worst-performing envs** with fixes 1+2: + - FingerTurnEasy (570 → target 950) + - FingerTurnHard (500 → target 950) + - CartpoleSwingup (443 → target 800) + - CartpoleSwingupSparse (270 → target 425) + - FishSwim (530 → target 650) + +4. **Evaluate results**. If FingerTurn still fails badly, investigate multi-unroll collection (Fix 3) or FingerTurn-specific tuning. + +5. **Metric decision**: Switch to `final_strength` for score reporting. CartpoleBalanceSparse (final MA=992) and AcrobotSwingup (final MA=253) likely pass under the correct metric. + +## Envs Likely Fixed by Metric Change Alone + +These envs have final MA above target but low "strength" due to slow early convergence: + +| Env | strength | final MA | target | Passes with final_strength? | +|---|---|---|---|---| +| CartpoleBalanceSparse | 545 | 992 | 700 | YES | +| AcrobotSwingup | 172 | 253 | 220 | YES | + +## Envs Requiring Spec Changes + +| Env | Current | Target | Most likely fix | +|---|---|---|---| +| FingerTurnEasy | 570 | 950 | reward_scale + larger batch | +| FingerTurnHard | 500 | 950 | reward_scale + larger batch | +| CartpoleSwingup | 443 | 800 | Revert minibatch_size=4096 | +| CartpoleSwingupSparse | 270 | 425 | reward_scale | +| SwimmerSwimmer6 | 485 | 560 | reward_scale | +| PointMass | 863 | 900 | reward_scale | +| FishSwim | 530 | 650 | reward_scale + larger batch | diff --git a/docs/plots/AcrobotSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/AcrobotSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..1482f73b507481ebd679b9dc17f1c5a8864881a4 --- /dev/null +++ b/docs/plots/AcrobotSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:f9ca5e3c04b0502384f38c830999bec6e46ea51e381802130edb0527785e0d48 +size 80554 diff --git a/docs/plots/AcrobotSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/AcrobotSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..5467245ed44c8267d01c566cd40b6ef126c3f283 --- /dev/null +++ b/docs/plots/AcrobotSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adc40668c1e3e28b25184af5102f7c79d5eb179c3ce32bac19e9ef047b9923fc +size 82826 diff --git a/docs/plots/AeroCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/AeroCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..1865dd6ef19a2b727b1c65146b6af525bd1f9492 --- /dev/null +++ b/docs/plots/AeroCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0715bb519f7276da6b3e50681cfe9d545d602bd251a77f9d1979ec13ac56b0fc +size 76794 diff --git a/docs/plots/AlohaHandOver_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/AlohaHandOver_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..8b033be15f1ca6dca4e071c78353846d1059e107 --- /dev/null +++ b/docs/plots/AlohaHandOver_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33ed65d9d0270051680b2ccd9312ee4ea69b87fb0f3b1b98efffb276677649f4 +size 72085 diff --git a/docs/plots/AlohaSinglePegInsertion_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/AlohaSinglePegInsertion_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..1660a128748ef6e840eef7983d9c7e6476275ac1 --- /dev/null 
+++ b/docs/plots/AlohaSinglePegInsertion_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a91154bcc40687b814fb63613d5fbc493b58640484d856fa91800feb651e7479 +size 78498 diff --git a/docs/plots/ApolloJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/ApolloJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..7054ab80442f281926617f2b48a8c09f6ca01f77 --- /dev/null +++ b/docs/plots/ApolloJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68eabd3757dac005d84adb72f12995b4983ca7c98cab240b0e96a306993dc5b1 +size 82966 diff --git a/docs/plots/BallInCup_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/BallInCup_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..0b16f0b21fdfd01f50c1aa7313ddce879b96428c --- /dev/null +++ b/docs/plots/BallInCup_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73fb107ae9c9e1043e759cea2dca67f71ee51e76809139671fe2458a95381560 +size 82247 diff --git a/docs/plots/BarkourJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/BarkourJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..6b9381a1c4054e7bb15bf6f28eec1698286cb30c --- /dev/null +++ b/docs/plots/BarkourJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c97bb598583209eeef065b78569cab59b65e0350556b2c719bd5fbbb570ca012 +size 76200 diff --git a/docs/plots/BerkeleyHumanoidJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png 
b/docs/plots/BerkeleyHumanoidJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..d8b1f8558f23785147a801fee72bf624249f26bd --- /dev/null +++ b/docs/plots/BerkeleyHumanoidJoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d2553170b568d365ab8e6c93823ec695a2e55dc0f09495ca17dacc3175aed55 +size 85542 diff --git a/docs/plots/BerkeleyHumanoidJoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/BerkeleyHumanoidJoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..346d16d5212b25b5230bdb5904b8025123093f4f --- /dev/null +++ b/docs/plots/BerkeleyHumanoidJoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7053045ca641d5ced3c5cb79c77e6688bf089f8d374c454c10efadca895393b1 +size 86727 diff --git a/docs/plots/CartpoleBalanceSparse_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/CartpoleBalanceSparse_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..6820324b5cff4867a7481224dde8781275289106 --- /dev/null +++ b/docs/plots/CartpoleBalanceSparse_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb74e3490ba8aac9c0bd44a52cdeeac026e2402df68be21673a7576fe4fc2a11 +size 89299 diff --git a/docs/plots/CartpoleBalance_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/CartpoleBalance_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..b7c481b37fb75d01649884cda246d00d06b9b91f --- /dev/null +++ b/docs/plots/CartpoleBalance_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:b15dee6680f622e710afa4546b6003a6169b5db6975effe693eb647409de04cf +size 77543 diff --git a/docs/plots/CartpoleSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/CartpoleSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..dd7997e978d9a1d7cef2177bc8536ed2f9c9cb93 --- /dev/null +++ b/docs/plots/CartpoleSwingupSparse_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:347f66b8322e4e86609ed3f67e699ad5efe227c69046872d38ad2dd4cdda4ea7 +size 94574 diff --git a/docs/plots/CartpoleSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/CartpoleSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..a52a3f122f902c8d7d46d7e5e8e2a73c349747f8 --- /dev/null +++ b/docs/plots/CartpoleSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5f29c966570c8b68c6e482b1b8e6609e04259cfb4e5ed82ba6a5f119e9e193f +size 87486 diff --git a/docs/plots/CheetahRun_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/CheetahRun_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..5b68461e9c3429e64e0910cef35b5b2ff5293d1b --- /dev/null +++ b/docs/plots/CheetahRun_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31a62ec1dab9d42b6276fb128a561302b7687b25c586381d9bafa1b1e00e0b4d +size 78511 diff --git a/docs/plots/FingerSpin_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/FingerSpin_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..7322890b4cd37cda77c5487f21d949da8e9b89ed --- /dev/null +++ 
b/docs/plots/FingerSpin_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:affd7c7f2b49918c5f6b07cd4569d43fecccdd4d28d76b9f5c51af09d72677ed +size 82815 diff --git a/docs/plots/FingerTurnEasy_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/FingerTurnEasy_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..a3cccec99a37696a4adad47e8d14efe88fc7f524 --- /dev/null +++ b/docs/plots/FingerTurnEasy_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7086e767803e8482701d388ad7ce75a698ff15ff0bf844abc92ee5ac33e02c7 +size 78897 diff --git a/docs/plots/FingerTurnHard_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/FingerTurnHard_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..18d93a7683e881903f826065828e6dec0578b9bc --- /dev/null +++ b/docs/plots/FingerTurnHard_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:615bd9b8df2f4dac068c3b79b3ade1b74382d16b1621c61d58f5012dc9e8bbca +size 81073 diff --git a/docs/plots/FishSwim_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/FishSwim_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..45c651b431aba685d008611a686b6e4b4e71e459 --- /dev/null +++ b/docs/plots/FishSwim_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40e19f41ef14bab02e5dc742060c9442138fd7ef0cca4a8e19a905084e0152a6 +size 81685 diff --git a/docs/plots/G1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/G1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 
0000000000000000000000000000000000000000..3bdc3ea050f04e097b8a7d268f24da44b1f63d1e --- /dev/null +++ b/docs/plots/G1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4118585f999029b3df1249ab0e930d063910575170b8aafe3eae64a5057134aa +size 79066 diff --git a/docs/plots/G1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/G1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..2e0157a55b89e4c2db4da5268d354f7510e81823 --- /dev/null +++ b/docs/plots/G1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9070a20c11875989b384f136667869a1c678757be80f8c3f57e1a13b24455f4e +size 77640 diff --git a/docs/plots/Go1Footstand_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/Go1Footstand_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..062b70f75145d551ef084d28cdb3e12ae9b6b14a --- /dev/null +++ b/docs/plots/Go1Footstand_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4e4851302afadbda1c69ead59e8674fd8940f117d18ea9ed29732bd4d540c4e +size 82277 diff --git a/docs/plots/Go1Getup_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/Go1Getup_multi_trial_graph_mean_returns_ma_vs_frames.png new file mode 100644 index 0000000000000000000000000000000000000000..de32636cddf3f4851b88eca3f876deccdc56929c --- /dev/null +++ b/docs/plots/Go1Getup_multi_trial_graph_mean_returns_ma_vs_frames.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:736bc6e8393545f2aec529f2ddd9c560772afd70d715873b953e5366bfc55a0a +size 79179 diff --git a/docs/plots/Go1Handstand_multi_trial_graph_mean_returns_ma_vs_frames.png 
b/docs/plots/Go1Handstand_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..5ddddbc88b0c3b19df7d5e451cab69b7bef8ec86
--- /dev/null
+++ b/docs/plots/Go1Handstand_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef760126f011e77c218fea2a55e23471a89d7788302717acf99efa26615c9fd2
+size 84245
diff --git a/docs/plots/Go1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/Go1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..6eb682737c33b13047805bdd78fd7cca0155242c
--- /dev/null
+++ b/docs/plots/Go1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20a08a60d2f6261c403920349b875eee0c415ae5c89bf74c1fa308edc772503a
+size 69696
diff --git a/docs/plots/Go1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/Go1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..02fb33ce80584423b19d9bd7c27a299b73bebc59
--- /dev/null
+++ b/docs/plots/Go1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f9eb07a28a2a4e9c4e5f7d99778d8492e571776074dc4d9d9a4fb9b661d1ea5
+size 67550
diff --git a/docs/plots/H1InplaceGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/H1InplaceGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..a511bf3371eb7b6b1bb9ce5b867aa40474f207ea
--- /dev/null
+++ b/docs/plots/H1InplaceGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3affb832443b3c167642f3ef060c02cf1c43b9b2685c6db4742681cef6a040bb
+size 83418
diff --git a/docs/plots/H1JoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/H1JoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..a365cc7e1fd84166c7bd1c5ace3d22ee78bfc736
--- /dev/null
+++ b/docs/plots/H1JoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d66d78e6feb14af9abb31551ce637d9939ddff75dc7bc836ef50b6d8547f65c9
+size 81247
diff --git a/docs/plots/HopperHop_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/HopperHop_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..3fd349ac1795547072a612380af358172e2e8ed0
--- /dev/null
+++ b/docs/plots/HopperHop_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73e476fde475d0cadde961a4ccd25db0e40fb323df4f807f9cac86a2f8467508
+size 68657
diff --git a/docs/plots/HopperStand_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/HopperStand_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..d72832ab791cb883d55aa91d50b153a5a81ee967
--- /dev/null
+++ b/docs/plots/HopperStand_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:831c9a4ae8247e1c5d5f4bae48257f5b7a7db2514672440357b8f297b9b715fe
+size 75109
diff --git a/docs/plots/HumanoidRun_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/HumanoidRun_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..c3f9601c6534bf1ed0da0c821ad8ac1e38b61cef
--- /dev/null
+++ b/docs/plots/HumanoidRun_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4952b81e9582d31dbd674112f43daf0d89dd69b9b5e0e705e3d093b12fedc2a
+size 74089
diff --git a/docs/plots/HumanoidStand_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/HumanoidStand_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..3774312f6cd88c60b0f212c3b1dfd9efd968ac0c
--- /dev/null
+++ b/docs/plots/HumanoidStand_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0e69226c970a7b49a7ab1b64421ccd13675463b66a338cf422db29496736444
+size 74663
diff --git a/docs/plots/HumanoidWalk_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/HumanoidWalk_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..336e01f85919d21bf6a9b9d26cba8f839d0c2fd9
--- /dev/null
+++ b/docs/plots/HumanoidWalk_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d532e5a8c1aaf47e086ce8dc02aafb567849afc4a9ecd90bf6f99c2c25e8eda
+size 78120
diff --git a/docs/plots/LeapCubeReorient_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/LeapCubeReorient_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..697e5b439b06808944e220fa878dc5d936d9e0cf
--- /dev/null
+++ b/docs/plots/LeapCubeReorient_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d97d93f4e0f9be1397cc7975fe35b129c1274a02c551b2ab891b4d0e099626b5
+size 84500
diff --git a/docs/plots/LeapCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/LeapCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..cfb93aa90c399947e4bfedec33c783bbeef2d28c
--- /dev/null
+++ b/docs/plots/LeapCubeRotateZAxis_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ad79a2fa58ec7b3a4047b8748cfcf359ad44be670d2bd626516d995da50c569
+size 83668
diff --git a/docs/plots/Op3Joystick_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/Op3Joystick_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..73df36d62305957772aae12023aa64f9ff497e94
--- /dev/null
+++ b/docs/plots/Op3Joystick_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e3eade1b974bbb16d164998bb4aa5bb6d49778727cea7aee539a8c9127c3e16
+size 73677
diff --git a/docs/plots/PandaOpenCabinet_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PandaOpenCabinet_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..cacfb98d1dd888bf74421561cb1a90aa55820db9
--- /dev/null
+++ b/docs/plots/PandaOpenCabinet_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c6dd5517d7e6c19782f8df796e73ac8f574345afdcfb2db478e59432fa8c884
+size 77535
diff --git a/docs/plots/PandaPickCubeCartesian_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PandaPickCubeCartesian_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..cd3e02dd6a796e74a12877ffe662cf8e6313525f
--- /dev/null
+++ b/docs/plots/PandaPickCubeCartesian_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3fb0cdbd30d9c1a6967e9821251253cb20c09e59030a282caf9d711b823ed3c
+size 80165
diff --git a/docs/plots/PandaPickCubeOrientation_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PandaPickCubeOrientation_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..160b3f988ad27e7bf23fc9437854db4da69223ec
--- /dev/null
+++ b/docs/plots/PandaPickCubeOrientation_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f27ec6bf42fb189002e8bf28aec6f5624f02744261b4c67efd3f48ea2e000d3e
+size 84427
diff --git a/docs/plots/PandaPickCube_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PandaPickCube_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..d25849e863e801e565c480ea9c3e1c42d31f6e1c
--- /dev/null
+++ b/docs/plots/PandaPickCube_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:245555dd5916e0934f89e5e05c6e7621fdcad703849f9ce6e337e54def36e964
+size 81026
diff --git a/docs/plots/PandaRobotiqPushCube_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PandaRobotiqPushCube_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..7eacbb5131a661056644d961edaca57c057bd2b8
--- /dev/null
+++ b/docs/plots/PandaRobotiqPushCube_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5badc0798be8ab5a190069b7b7a61d0bd2d9a940af9079e3ad82ba56d74a352f
+size 80550
diff --git a/docs/plots/PendulumSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PendulumSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..59bf9a7b10ecc652b4276a53d41157b8dff30d97
--- /dev/null
+++ b/docs/plots/PendulumSwingup_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0402f9424774bb01661b885284646ac81c0dba832151637240ee084c7d31ab58
+size 79641
diff --git a/docs/plots/PointMass_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/PointMass_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..1085d72d70a1689e2cfad3fb005df7ac7de588f1
--- /dev/null
+++ b/docs/plots/PointMass_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16cb5c3177acd524f06cb6363c73c6faea261f771df8dd9d543cca55511fc279
+size 79488
diff --git a/docs/plots/ReacherEasy_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/ReacherEasy_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..ab8db0b650a1f68d343959ccbdb29c43ae9ef96a
--- /dev/null
+++ b/docs/plots/ReacherEasy_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b285f081ad0db4addcfffc4a6acb2c571231a08f10e40be392ee383bcc20f36
+size 70753
diff --git a/docs/plots/ReacherHard_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/ReacherHard_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..9ca3a4485e6ab462df3223f373b48eba7d8e067c
--- /dev/null
+++ b/docs/plots/ReacherHard_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:691c7fd249ea2198b6f44898e3f1cc8419f702b5e9bac566741bd7676784fe46
+size 71579
diff --git a/docs/plots/SpotFlatTerrainJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/SpotFlatTerrainJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..423f5a27ce8a49514e02e78e53c30e10bbe0d7d3
--- /dev/null
+++ b/docs/plots/SpotFlatTerrainJoystick_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72a0a4c7e72e526f6fdc8734518a1cfcd4ea1ea65cd583583dd3661a5f37ad65
+size 86765
diff --git a/docs/plots/SpotGetup_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/SpotGetup_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..e54965dd632f7284378acef842af89b380b68800
--- /dev/null
+++ b/docs/plots/SpotGetup_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e6ae7bc8d8efcc6aa13f90a69b534dcfc8830c5a573676d08f814a7e8b551c6
+size 91580
diff --git a/docs/plots/SpotJoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/SpotJoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..2cfc625f4fd8dc5a0928834ec1226f46cee24237
--- /dev/null
+++ b/docs/plots/SpotJoystickGaitTracking_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b296a583828fa913168f5602862dd154bdbbd5ec0ce9db0dfbe615c85577cc2
+size 88924
diff --git a/docs/plots/SwimmerSwimmer6_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/SwimmerSwimmer6_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..e12cc805044d81c1968c83160fb6b56a0eda41d8
--- /dev/null
+++ b/docs/plots/SwimmerSwimmer6_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3597eee8d3dc98d04c1639dbdf1db90aba7f1a611a145e5f4f8d20a7dc662dcc
+size 81261
diff --git a/docs/plots/T1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/T1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..8277085fbf283d4efceeda333faa83e5d46939e4
--- /dev/null
+++ b/docs/plots/T1JoystickFlatTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fbaad2b725c64b042f7cd6764a55b21b0027b4aed0ea7e547a4bb98b81077e0
+size 78849
diff --git a/docs/plots/T1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/T1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..c4b01793462068ba23eb4b6d4f9ea841bef16da5
--- /dev/null
+++ b/docs/plots/T1JoystickRoughTerrain_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19ddf538128121ffa038d9575ea87d8592f131f86c92fcc20f08bc4166a2d08a
+size 84690
diff --git a/docs/plots/WalkerRun_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/WalkerRun_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..40d5de91289358e62fc8a1fcf9fd9f0e831c1f05
--- /dev/null
+++ b/docs/plots/WalkerRun_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0582926e7ed4e067fd197fdf15a227b7312db7aba758035371b3833644e5aa5
+size 83735
diff --git a/docs/plots/WalkerStand_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/WalkerStand_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..995f9cdf7ec136298dfa3400955e0c246f65ec9f
--- /dev/null
+++ b/docs/plots/WalkerStand_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c695f21c9122834fad1553da2edc1e6777bcaafdb612a40792ebf40efa4b6665
+size 79343
diff --git a/docs/plots/WalkerWalk_multi_trial_graph_mean_returns_ma_vs_frames.png b/docs/plots/WalkerWalk_multi_trial_graph_mean_returns_ma_vs_frames.png
new file mode 100644
index 0000000000000000000000000000000000000000..022807ea2474ce835ed4bdcded4a404656d74063
--- /dev/null
+++ b/docs/plots/WalkerWalk_multi_trial_graph_mean_returns_ma_vs_frames.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:758a924863871aaa03ddf0935a9650fb0519f55349e6b6184a02cb24668ca08f
+size 81709