EpicPinkPenguin committed on
Commit 5c85b3d
1 Parent(s): 9f45be4

Update README.md


Add first draft of the README

Files changed (1)
  1. README.md +60 -2
README.md CHANGED
@@ -43,7 +43,65 @@ tags:
  - bigfish
  - benchmark
  - openai
- pretty_name: Procgen Benchmark Bigfish
+ pretty_name: Procgen Benchmark - Bigfish
  size_categories:
  - 100K<n<1M
- ---
+ ---
+ # Procgen Benchmark - Bigfish
+ This dataset contains trajectories generated by a PPO reinforcement learning agent trained on the Bigfish environment from the [Procgen Benchmark](https://openai.com/index/procgen-benchmark/). The agent was trained for 50M steps and reached a final evaluation performance of 32.33.
+
+ ## Dataset Structure
+ ### Data Instances
+ Each data instance represents a single environment step, stored as a tuple of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
+
+ ```python
+ {'action': 1,
+ 'done': False,
+ 'observation': [[[0, 166, 253],
+ [0, 174, 255],
+ [0, 170, 251],
+ [0, 191, 255],
+ [0, 191, 255],
+ [0, 221, 255],
+ [0, 243, 255],
+ [0, 248, 255],
+ [0, 243, 255],
+ [10, 239, 255],
+ [25, 255, 255],
+ [0, 241, 255],
+ [0, 235, 255],
+ [17, 240, 255],
+ [10, 243, 255],
+ [27, 253, 255],
+ [39, 255, 255],
+ [58, 255, 255],
+ [85, 255, 255],
+ [111, 255, 255],
+ [135, 255, 255],
+ [151, 255, 255],
+ [173, 255, 255],
+ ...
+ [0, 0, 37],
+ [0, 0, 39]]],
+ 'reward': 0.0,
+ 'truncated': False}
+ ```
+
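+ Since the dataset is hosted on the Hugging Face Hub, instances like the one above can be loaded with `datasets.load_dataset`. A minimal sketch, using a placeholder repository ID rather than this dataset's actual Hub ID:
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repo ID; replace with this dataset's actual Hub ID.
+ dataset = load_dataset("EpicPinkPenguin/procgen-bigfish", split="train")
+
+ # Each entry is one step: observation, action, reward, done, truncated.
+ step = dataset[0]
+ print(step["action"], step["reward"], step["done"], step["truncated"])
+ ```
+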
+ ### Data Fields
+ - `observation`: The current RGB observation from the environment.
+ - `action`: The action predicted by the agent for the current observation.
+ - `reward`: The reward received from stepping the environment with the current action.
+ - `done`: Whether the next observation marks the start of a new episode; obtained after stepping the environment with the current action.
+ - `truncated`: Whether the next observation marks the start of a new episode due to truncation; obtained after stepping the environment with the current action.
+
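+ Because `reward`, `done`, and `truncated` are the values returned after stepping with the current action, a full transition (o_t, a_t, r_{t+1}, o_{t+1}) can be assembled from two consecutive instances. A minimal sketch with a hypothetical helper, assuming `dataset` is a loaded split as in the example above:
+
+ ```python
+ def to_transition(dataset, t):
+     """Hypothetical helper: pair step t with step t+1 under the (o_t, a_t, r_{t+1}, ...) convention."""
+     step, next_step = dataset[t], dataset[t + 1]
+     return {
+         "obs": step["observation"],
+         "action": step["action"],
+         "reward": step["reward"],  # r_{t+1}
+         "next_obs": next_step["observation"],
+         "terminal": step["done"] or step["truncated"],  # episode boundary after this step
+     }
+ ```
+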
+ ### Data Splits
+ The dataset is divided into a 'train' (90%) and a 'test' (10%) split:
+
+ - `train`: Trajectories used for training reinforcement learning models.
+ - `test`: Trajectories used for evaluating the performance of trained models.
+
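+ Both splits can be loaded by name; a brief sketch, again with a placeholder repository ID:
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repo ID; replace with this dataset's actual Hub ID.
+ splits = load_dataset("EpicPinkPenguin/procgen-bigfish")
+ print(splits["train"].num_rows, splits["test"].num_rows)  # ~90% / ~10% of the steps
+ ```
+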
+ ## Dataset Creation
+ The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 50M steps on the Procgen Bigfish environment. The agent obtained a final performance of 32.33. The trajectories were generated by taking the argmax action at each step, which corresponds to taking the mode of the action distribution.
+
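+ This greedy selection rule, taking the mode of the categorical action distribution rather than sampling from it, can be sketched as follows (the `policy` network is illustrative, assumed to output action logits, and is not this dataset's actual training code):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def greedy_action(policy, obs):
+     """Select the argmax (mode) of the categorical action distribution."""
+     logits = policy(torch.as_tensor(obs).unsqueeze(0).float())  # (1, num_actions)
+     return int(logits.argmax(dim=-1).item())
+ ```
+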
+ ## Procgen Benchmark
+ The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience and high diversity both within and across environments, making it well suited for evaluating sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team, and it addresses the need for RL benchmarks that are more diverse than single complex environments such as Dota or StarCraft.