---
library_name: skrl
tags:
  - deep-reinforcement-learning
  - reinforcement-learning
  - skrl
model-index:
  - name: PPO
    results:
      - metrics:
          - type: mean_reward
            value: 298.89 +/- 27.4
            name: Total reward (mean)
        task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: IsaacGymEnvs-BallBalance
          type: IsaacGymEnvs-BallBalance
---

# IsaacGymEnvs-BallBalance-PPO

Trained agent for NVIDIA Isaac Gym Preview environments.

- Task: BallBalance
- Agent: PPO

## Usage (with skrl)

Note: Visit the skrl Examples section to access the scripts.

- PyTorch

  ```python
  from skrl.utils.huggingface import download_model_from_huggingface

  # assuming that there is an agent named `agent`
  path = download_model_from_huggingface("skrl/IsaacGymEnvs-BallBalance-PPO", filename="agent.pt")
  agent.load(path)
  ```
- JAX

  ```python
  from skrl.utils.huggingface import download_model_from_huggingface

  # assuming that there is an agent named `agent`
  path = download_model_from_huggingface("skrl/IsaacGymEnvs-BallBalance-PPO", filename="agent.pickle")
  agent.load(path)
  ```
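
Once the checkpoint is loaded, the agent can be run in evaluation mode. The snippet below is a minimal sketch, assuming `env` and `agent` have already been created following the skrl example scripts for Isaac Gym environments; the timestep count is an arbitrary choice for illustration.

```python
# minimal evaluation sketch: `env` and `agent` are assumed to be set up
# following the skrl example scripts for Isaac Gym environments
from skrl.trainers.torch import SequentialTrainer

# the number of timesteps is an arbitrary choice for illustration
cfg_trainer = {"timesteps": 1600, "headless": True}
trainer = SequentialTrainer(cfg=cfg_trainer, env=env, agents=agent)

# evaluate the agent (no training updates are performed)
trainer.eval()
```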

## Hyperparameters

Note: Parameters not listed below keep their default values.

```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL

cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16  # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 8  # 16 rollouts * 4096 envs / 8192 samples per mini-batch
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0  # disable early stopping based on KL divergence
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.1  # scale rewards
# `env` and `device` are assumed to be defined as in the skrl example scripts
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```