---
license: mit
---

# V-D4RL

<p align="center">
<img src="figs/envs.png" />
</p>

V-D4RL provides pixel-based analogues of the popular D4RL benchmarking tasks, derived from the **`dm_control`** suite, along with natural extensions of two state-of-the-art online pixel-based continuous control algorithms, DrQ-v2 and DreamerV2, to the offline setting. For further details, please see the paper:

**_Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations_**; Cong Lu*, Philip J. Ball*, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh.

<p align="center">
<a href="https://arxiv.org/abs/2206.04779">View on arXiv</a>
</p>

## Benchmarks
The V-D4RL datasets can be found in this repository under `vd4rl`. **These must be downloaded before running the code.** Assuming the data is stored under `vd4rl_data`, the file structure is:

```
vd4rl_data
└───main
│   └───walker_walk
│   │   └───random
│   │   │   └───64px
│   │   │   └───84px
│   │   └───medium_replay
│   │   │   ...
│   └───cheetah_run
│   │   ...
│   └───humanoid_walk
│   │   ...
└───distracting
│   ...
└───multitask
│   ...
```
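
As a hedged sketch of one way to download the data, you can clone the hosting repository with Git LFS and point `vd4rl_data` at the `vd4rl` folder. The repository ID below is a placeholder, and it is assumed that `vd4rl` contains the `main`/`distracting`/`multitask` subfolders; adjust both to match where this repository actually lives.
```
# Sketch only: <repo_id> is a placeholder for this repository's ID.
git lfs install
git clone https://huggingface.co/datasets/<repo_id> vd4rl_repo
mv vd4rl_repo/vd4rl vd4rl_data
```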

## Baselines

### Environment Setup
Requirements are presented in conda environment files named `conda_env.yml` within each folder. The command to create the environment is:
```
conda env create -f conda_env.yml
```

Alternatively, Dockerfiles are located under `dockerfiles`; replace `<<USER_ID>>` in the files with your own user ID, obtained from the command `id -u`.
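
As a hedged sketch (the Dockerfile path and image tag below are assumptions, not names fixed by this repo), the substitution and build could look like:
```
# Substitute your user ID into an assumed dockerfiles/Dockerfile, then build an image tagged vd4rl.
sed -i "s/<<USER_ID>>/$(id -u)/g" dockerfiles/Dockerfile
docker build -f dockerfiles/Dockerfile -t vd4rl .
```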

### V-D4RL Main Evaluation
Example run commands are given below; each takes an environment name and a dataset identifier:

```
ENVNAME=walker_walk # choice in ['walker_walk', 'cheetah_run', 'humanoid_walk']
TYPE=random # choice in ['random', 'medium_replay', 'medium', 'medium_expert', 'expert']
```

#### Offline DV2
```
python offlinedv2/train_offline.py --configs dmc_vision --task dmc_${ENVNAME} --offline_dir vd4rl_data/main/${ENVNAME}/${TYPE}/64px --offline_penalty_type meandis --offline_lmbd_cons 10 --seed 0
```

#### DrQ+BC
```
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/main/${ENVNAME}/${TYPE}/84px nstep=3 seed=0
```

#### DrQ+CQL
```
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/main/${ENVNAME}/${TYPE}/84px algo=cql cql_importance_sample=false min_q_weight=10 seed=0
```

#### BC
```
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/main/${ENVNAME}/${TYPE}/84px algo=bc seed=0
```

### Distracted and Multitask Experiments
To run the distracted and multitask experiments, it suffices to change the offline directory passed to the commands above.
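
For example, assuming the `distracting` datasets mirror the directory layout of `main` (an assumption to verify against the downloaded folders), a DrQ+BC run could look like:
```
# Hedged example: swap main for distracting in the offline directory.
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/distracting/${ENVNAME}/${TYPE}/84px nstep=3 seed=0
```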

## Note on data collection and format
We follow the image sizes and dataset format of each algorithm's native codebase.
This means that Offline DV2 uses `*.npz` files with 64px images to store the offline data, whereas DrQ+BC uses `*.hdf5` files with 84px images.
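
To sanity-check a downloaded file in either format, a quick listing of array names and shapes can help. The sketch below assumes the data has already been downloaded to the paths shown and simply inspects the first file it finds; it makes no assumption about the key names inside each file.
```
# Hedged sketch: print array names and shapes from one .npz and one .hdf5 file.
python -c "import glob, numpy as np; p = sorted(glob.glob('vd4rl_data/main/walker_walk/random/64px/*.npz'))[0]; d = np.load(p); print(p); print({k: d[k].shape for k in d.files})"
python -c "import glob, h5py; p = sorted(glob.glob('vd4rl_data/main/walker_walk/random/84px/*.hdf5'))[0]; f = h5py.File(p, 'r'); print(p); f.visititems(lambda n, o: print(n, getattr(o, 'shape', '')))"
```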

The data collection procedure is detailed in Appendix B of our paper, and we provide conversion scripts in `conversion_scripts`.
For the original SAC policies used to generate the data, see [here](https://github.com/philipjball/SAC_PyTorch/blob/dmc_branch/train_agent.py).
See [here](https://github.com/philipjball/SAC_PyTorch/blob/dmc_branch/gather_offline_data.py) for the distracted and multitask variants.
We used `seed=0` for all data generation.

## Acknowledgements
V-D4RL builds upon many works and open-source codebases in both offline reinforcement learning and online pixel-based continuous control. We would like to particularly thank the authors of:
- [D4RL](https://github.com/rail-berkeley/d4rl)
- [DMControl](https://github.com/deepmind/dm_control)
- [DreamerV2](https://github.com/danijar/dreamerv2)
- [DrQ-v2](https://github.com/facebookresearch/drqv2)
- [LOMPO](https://github.com/rmrafailov/LOMPO)

## Contact
Please contact [Cong Lu](mailto:cong.lu@stats.ox.ac.uk) or [Philip Ball](mailto:ball@robots.ox.ac.uk) for any queries.