---
license: cc-by-nc-sa-4.0
---

# OpenDV-YouTube

This is the dataset repository for the OpenDV-YouTube language annotations, which include a context description and a driving command for each video clip. For more details, please refer to the GenAD project and OpenDV-YouTube.

## Usage

To use the annotations, you first need to download and prepare the video data as instructed in OpenDV-YouTube.

You can use the following code to load the train and validation annotations respectively.

```python
import json

# for train: annotations are split across 10 files
full_annos = []
for split_id in range(10):
    with open("10hz_YouTube_train_split{}.json".format(split_id), "r") as f:
        full_annos.extend(json.load(f))

# for val: a single file
with open("10hz_YouTube_val.json", "r") as f:
    val_annos = json.load(f)
```
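After loading, a quick sanity check (not part of the original card, just illustrative) is to print the number of clips in each split and the keys of one entry:

```python
# Optional sanity check on the loaded annotations.
print("train clips:", len(full_annos))
print("val clips:", len(val_annos))
print(full_annos[0].keys())  # expected keys: cmd, blip, folder, first_frame, last_frame (see below)
```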

Annotations are loaded into full_annos (and val_annos) as lists, where each element contains the annotations for one video clip. Every element is a dictionary with the following structure.

```python
{
  "cmd": <int>,          # command, i.e. the command of the ego vehicle in the video clip.
  "blip": <str>,         # context, i.e. the BLIP description of the center frame in the video clip.
  "folder": <str>,       # the relative path from the processed OpenDV-YouTube dataset root to the image folder of the video clip.
  "first_frame": <str>,  # the filename of the first frame in the clip. Note that this file is included in the video clip.
  "last_frame": <str>,   # the filename of the last frame in the clip. Note that this file is included in the video clip.
}
```
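For example, one way to turn an annotation entry into the list of image paths for its clip is to join `folder` with the processed dataset root and slice the sorted frame filenames between `first_frame` and `last_frame` (both inclusive, per the notes above). This is only a sketch: the dataset root path is a placeholder, and it assumes the frame filenames sort lexicographically in temporal order.

```python
import os

def clip_frame_paths(anno, dataset_root):
    """Return the image paths for one annotated clip.

    dataset_root is a placeholder for the root of your processed
    OpenDV-YouTube images; set it to wherever you prepared the data.
    Assumes frame filenames sort lexicographically in temporal order.
    """
    clip_dir = os.path.join(dataset_root, anno["folder"])
    frames = sorted(os.listdir(clip_dir))
    start = frames.index(anno["first_frame"])
    end = frames.index(anno["last_frame"])          # inclusive, per the annotation notes
    return [os.path.join(clip_dir, f) for f in frames[start:end + 1]]

# Example usage (path is a placeholder):
# paths = clip_frame_paths(full_annos[0], "path/to/processed/OpenDV-YouTube")
```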