DBusAI committed
Commit
0aa569a
1 Parent(s): 6101614

Retrain PPO model for CarRacing-v0

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
PPO-CarRacing-v0.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9283944e460de919c58fc88e737d55c8b8a383b8bebca87c7f21820a917b144c
+ size 26507590
PPO-CarRacing-v0/_stable_baselines3_version ADDED
@@ -0,0 +1 @@
+ 1.5.0
PPO-CarRacing-v0/data ADDED
The diff for this file is too large to render. See raw diff
 
PPO-CarRacing-v0/policy.optimizer.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ddf269d5883233df02590f1d39312272a161630e96f9f6f15187efc1a6dcb0b
+ size 17412311
PPO-CarRacing-v0/policy.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd839c1e59fab092319ec52d1d5ff16b9be544eeba2ccec0072589978488ce12
+ size 8707070
PPO-CarRacing-v0/pytorch_variables.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d030ad8db708280fcae77d87e973102039acd23a11bdecc3db8eb6c0ac940ee1
+ size 431
PPO-CarRacing-v0/system_info.txt ADDED
@@ -0,0 +1,7 @@
+ OS: Linux-5.10.107+-x86_64-with-debian-bullseye-sid #1 SMP Sun Apr 24 15:04:08 UTC 2022
+ Python: 3.7.12
+ Stable-Baselines3: 1.5.0
+ PyTorch: 1.9.1
+ GPU Enabled: True
+ Numpy: 1.21.6
+ Gym: 0.21.0
README.md ADDED
@@ -0,0 +1,28 @@
+ ---
+ library_name: stable-baselines3
+ tags:
+ - CarRacing-v0
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - stable-baselines3
+ model-index:
+ - name: PPO
+   results:
+   - metrics:
+     - type: mean_reward
+       value: 81.28 +/- 82.32
+       name: mean_reward
+     task:
+       type: reinforcement-learning
+       name: reinforcement-learning
+     dataset:
+       name: CarRacing-v0
+       type: CarRacing-v0
+ ---
+
+ # **PPO** Agent playing **CarRacing-v0**
+ This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
+
+ ## Usage (with Stable-baselines3)
+ TODO: Add your code
+
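The README's usage section is left as a TODO. A minimal evaluation sketch, assuming the checkpoint `PPO-CarRacing-v0.zip` added in this commit loads with the standard `PPO.load` API of Stable-Baselines3 1.5.0 and Gym 0.21 (the versions recorded in system_info.txt); the `mean_episode_reward` helper is illustrative, not part of the repo:

```python
def mean_episode_reward(model, env, n_episodes=10):
    """Average total reward over n_episodes deterministic rollouts.

    Works with any object exposing SB3's model.predict(obs, deterministic=...)
    and Gym 0.21's env.reset() / env.step() 4-tuple API.
    """
    totals = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, _ = env.step(action)
            total += reward
        totals.append(total)
    return sum(totals) / len(totals)


if __name__ == "__main__":
    import gym
    from stable_baselines3 import PPO

    # Loads the LFS checkpoint PPO-CarRacing-v0.zip from this repo.
    model = PPO.load("PPO-CarRacing-v0")
    env = gym.make("CarRacing-v0")
    # The commit's results.json reports 81.28 +/- 82.32 over 10 episodes.
    print(mean_episode_reward(model, env))
```

The high reward variance in results.json (std 82.32 against a mean of 81.28) is typical of CarRacing: runs that leave the track score far below runs that complete it.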
config.json ADDED
The diff for this file is too large to render. See raw diff
 
replay.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16f5787cfaa81a786a168519d25db003a18431bf7849354943b1b8048077adeb
+ size 768368
results.json ADDED
@@ -0,0 +1 @@
+ {"mean_reward": 81.27780058607459, "std_reward": 82.32042399214119, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2022-05-13T12:55:08.198971"}