---
license: apache-2.0
---
# Dataset Card for Pong-v4-expert-MCTS
## Table of Contents

- [Supported Tasks and Baseline](#supported-tasks-and-baseline)
- [Data Usage](#data-usage)
  - [Data Description](#data-description)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Additional Information](#additional-information)
  - [Who are the source data producers?](#who-are-the-source-data-producers)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Known Limitations](#known-limitations)
  - [License](#license)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Supported Tasks and Baseline

- This dataset supports training of the [Procedure Cloning (PC)](https://arxiv.org/abs/2205.10816) algorithm.
- Baseline results when the decision sequence length is 0:

| Train loss                                         | Test Acc | Reward |
| -------------------------------------------------- | -------- | ------ |
| <img src="./img/sup_loss.png" style="zoom:50%;" /> | 0.90     | 20     |

- Baseline results when the decision sequence length is 4:

| Train action loss                                     | Train hidden state loss                           | Train acc (auto-regressive mode)                    | Reward |
| ----------------------------------------------------- | ------------------------------------------------- | --------------------------------------------------- | ------ |
| <img src="./img/action_loss.png" style="zoom:50%;" /> | <img src="./img/hs_loss.png" style="zoom:50%;" /> | <img src="./img/train_acc.png" style="zoom:50%;" /> | -21    |

## Data Usage

### Data Description

This dataset includes 8 episodes of the Pong-v4 environment. The expert policy is [EfficientZero](https://arxiv.org/abs/2111.00210), which generates MCTS hidden states alongside its actions. Because each observation is paired with its corresponding hidden states, this dataset is suitable for imitation-learning methods that learn from sequences, such as PC.

### Data Fields

- `obs`: an Array3D of observations from 8 trajectories of an evaluated agent. The data type is uint8 and each value lies in [0, 255]. Each observation has shape [96, 96, 3], i.e., the channel dimension is last.
- `actions`: an integer action from 8 trajectories of an evaluated agent, taking values from 0 to 5. Details about the action space can be found in the [Pong - Gym Documentation](https://www.gymlibrary.dev/environments/atari/pong/).
- `hidden_state`: an Array3D of the corresponding hidden states generated by EfficientZero, from 8 trajectories of an evaluated agent. The data type is float32.

Below is an example of loading the data with an iterator:

```python
from safetensors import safe_open

def generate_examples(filepath):
    # Load every tensor in the safetensors file into memory.
    data = {}
    with safe_open(filepath, framework="pt", device="cpu") as f:
        for key in f.keys():
            data[key] = f.get_tensor(key)

    # Yield one (index, example) pair per timestep.
    for idx in range(len(data['obs'])):
        yield idx, {
            'observation': data['obs'][idx],
            'action': data['actions'][idx],
            'hidden_state': data['hidden_state'][idx],
        }
```
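
A quick usage check of the generator above (the local filename `pong_v4_expert.safetensors` is an illustrative assumption; substitute the path of your downloaded file):

```python
# Inspect the first few examples; shapes should match the field descriptions above.
for idx, example in generate_examples("pong_v4_expert.safetensors"):  # hypothetical local path
    print(idx, example['observation'].shape, example['action'], example['hidden_state'].shape)
    if idx >= 2:
        break
```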

### Data Splits
There is only a training split for this dataset, since evaluation is performed by interacting with the simulator.

### Initial Data Collection and Normalization

- This dataset was collected by an EfficientZero policy.
- The standard for expert data is that each of the 8 episodes has a return above 20.
- No normalization has been applied beforehand (i.e., each observation value is a uint8 scalar in [0, 255]); see the sketch after this list.
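
Since the data is stored unnormalized, a typical preprocessing step before training is to rescale observations to floats in [0, 1]; a minimal sketch (the helper name `normalize_obs` is an assumption, not part of the dataset API):

```python
import torch

def normalize_obs(obs: torch.Tensor) -> torch.Tensor:
    # Rescale uint8 pixel values in [0, 255] to float32 in [0, 1].
    # `normalize_obs` is an illustrative helper, not part of the dataset.
    return obs.to(torch.float32) / 255.0
```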

## Additional Information

### Who are the source data producers?

[@kxzxvbk](https://huggingface.co/kxzxvbk)

### Social Impact of Dataset

- This dataset can be used for imitation learning, especially for algorithms that learn from sequences.
- Very few open-source datasets currently exist for MCTS-based policies.
- This dataset can potentially promote research on sequence-based imitation learning algorithms.

### Known Limitations

- This dataset is intended for academic research only.
- For any commercial use or other cooperation, please contact: opendilab@pjlab.org.cn

### License
This dataset is under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

```
@misc{Pong-v4-expert-MCTS,
    title        = {{Pong-v4-expert-MCTS}: An {OpenDILab} dataset for the Procedure Cloning algorithm on {Pong-v4}},
    author       = {Pong-v4-expert-MCTS Contributors},
    publisher    = {Hugging Face},
    howpublished = {\url{https://huggingface.co/datasets/OpenDILabCommunity/Pong-v4-expert-MCTS}},
    year         = {2023},
}
```

### Contributions
This dataset is partially based on the following repositories; many thanks for their pioneering work:

- https://github.com/opendilab/DI-engine
- https://github.com/opendilab/LightZero

Anyone who wants to contribute to this dataset should refer to the [contribution guide](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards).