---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.22 +/- 0.12
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **PandaReachDense-v3**

This is a trained model of a **PPO** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
import gymnasium as gym
import panda_gym  # registers the Panda environments, including PandaReachDense-v3

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

env_id = "PandaReachDense-v3"

# Create 4 parallel training environments and normalize observations and rewards
env = make_vec_env(env_id, n_envs=4)
env = VecNormalize(env, training=True, norm_obs=True, norm_reward=True,
                   gamma=0.5, epsilon=1e-10, norm_obs_keys=None)

model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(1_000_000)

# Save the policy and the normalization statistics so evaluation can reload them
model.save("Slay-PandaReachDense-v3")
env.save("vec_normalize.pkl")

# Rebuild the evaluation environment with the saved normalization statistics
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)
eval_env.training = False
eval_env.norm_reward = False  # reward normalization is not needed at test time

model = PPO.load("Slay-PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, eval_env)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```
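If the checkpoint and normalization statistics have been pushed to the Hugging Face Hub, they can be fetched with `load_from_hub` from `huggingface_sb3` instead of relying on local files. The sketch below assumes this; the `repo_id` and file names are placeholders and must match whatever was actually uploaded for this model.

```python
import gymnasium as gym
import panda_gym  # registers the Panda environments

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Hypothetical repo id and file names -- replace with the actual Hub repo
checkpoint = load_from_hub(repo_id="<user>/Slay-PandaReachDense-v3",
                           filename="Slay-PandaReachDense-v3.zip")
stats_path = load_from_hub(repo_id="<user>/Slay-PandaReachDense-v3",
                           filename="vec_normalize.pkl")

# Evaluation uses the same normalization statistics as training,
# but with updates frozen and reward normalization disabled
eval_env = DummyVecEnv([lambda: gym.make("PandaReachDense-v3")])
eval_env = VecNormalize.load(stats_path, eval_env)
eval_env.training = False
eval_env.norm_reward = False

model = PPO.load(checkpoint)
mean_reward, std_reward = evaluate_policy(model, eval_env)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```

Reloading the `VecNormalize` statistics matters: the policy was trained on normalized observations, so evaluating on raw observations would badly understate its performance.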