# Dataset Preparation
## Before Preparation
It is recommended to symlink the dataset root to `$MMDETECTION3D/data`.
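For example, a minimal sketch, assuming your datasets actually live under `/path/to/your/datasets` (an illustrative path):

```shell
# Link an existing dataset root into the repo so configs resolve ./data/... paths.
ln -s /path/to/your/datasets $MMDETECTION3D/data
```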
If your folder structure is different from the following, you may need to change the corresponding paths in config files.
```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
│   ├── waymo
│   │   ├── waymo_format
│   │   │   ├── training
│   │   │   ├── validation
│   │   │   ├── testing
│   │   │   ├── gt.bin
│   │   ├── kitti_format
│   │   │   ├── ImageSets
│   ├── lyft
│   │   ├── v1.01-train
│   │   │   ├── v1.01-train (train_data)
│   │   │   ├── lidar (train_lidar)
│   │   │   ├── images (train_images)
│   │   │   ├── maps (train_maps)
│   │   ├── v1.01-test
│   │   │   ├── v1.01-test (test_data)
│   │   │   ├── lidar (test_lidar)
│   │   │   ├── images (test_images)
│   │   │   ├── maps (test_maps)
│   │   ├── train.txt
│   │   ├── val.txt
│   │   ├── test.txt
│   │   ├── sample_submission.csv
│   ├── s3dis
│   │   ├── meta_data
│   │   ├── Stanford3dDataset_v1.2_Aligned_Version
│   │   ├── collect_indoor3d_data.py
│   │   ├── indoor3d_util.py
│   │   ├── README.md
│   ├── scannet
│   │   ├── meta_data
│   │   ├── scans
│   │   ├── scans_test
│   │   ├── batch_load_scannet_data.py
│   │   ├── load_scannet_data.py
│   │   ├── scannet_utils.py
│   │   ├── README.md
│   ├── sunrgbd
│   │   ├── OFFICIAL_SUNRGBD
│   │   ├── matlab
│   │   ├── sunrgbd_data.py
│   │   ├── sunrgbd_utils.py
│   │   ├── README.md
│   ├── semantickitti
│   │   ├── sequences
│   │   │   ├── 00
│   │   │   │   ├── labels
│   │   │   │   ├── velodyne
│   │   │   ├── 01
│   │   │   ├── ..
│   │   │   ├── 22
```
## Download and Data Preparation
### KITTI
- Download KITTI 3D detection data HERE. Alternatively, you can download the dataset from OpenDataLab using MIM. The command scripts are the following:
```shell
# install OpenDataLab CLI tools
pip install -U opendatalab
# log in OpenDataLab. Note that you should register an account on [OpenDataLab](https://opendatalab.com/) beforehand.
pip install odl
odl login
# download and preprocess by MIM
mim download mmdet3d --dataset kitti
```
- Prepare KITTI data splits by running:
```shell
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt
```
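As a quick sanity check that the four split files downloaded intact, you can count their entries (a plausibility check only, not an official verification step):

```shell
# Each split file should contain one frame index per line.
wc -l ./data/kitti/ImageSets/*.txt
```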
- Generate info files by running:
```shell
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```
In an environment using slurm, users may run the following command instead:
```shell
sh tools/create_data.sh <partition> kitti
```
Tips:

- **Ready-made Annotations.** We have also provided KITTI data annotation files generated offline here. You could download them and place them under `data/kitti/`. However, if you want to use the `ObjectSample` augmentation in LiDAR-based detection methods, you should additionally generate the ground-truth database files and annotations:

```shell
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti --only-gt-database
```
### Waymo
Download Waymo open dataset V1.4.1 HERE and its data split HERE. Then put the `.tfrecord` files into the corresponding folders in `data/waymo/waymo_format/` and put the data split `.txt` files into `data/waymo/kitti_format/ImageSets`. Download the ground truth `.bin` file for the validation set HERE and put it into `data/waymo/waymo_format/`. A tip is that you can use `gsutil` to download this large-scale dataset with commands, as in the sketch below; you can also take this tool as an example for more details.
Subsequently, prepare Waymo data by running:
```shell
# TF_CPP_MIN_LOG_LEVEL=3 will disable all logging output from TensorFlow.
# The number of `--workers` depends on the maximum number of cores in your CPU.
TF_CPP_MIN_LOG_LEVEL=3 python tools/create_data.py waymo --root-path ./data/waymo --out-dir ./data/waymo --workers 128 --extra-tag waymo --version v1.4
```
Note that:

- In case the preprocessing of the Waymo dataset is slow or blocked, consider reducing the value of `--workers`. If this doesn't resolve the issue, you could set `--workers` to 0 to avoid using multiprocessing.
- If your local disk does not have enough space for saving the converted data, you can change `--out-dir` to anywhere else. Just remember to create the folders and prepare the data there in advance, then link them back to `data/waymo/kitti_format` after the data conversion (see the sketch after this list).
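A minimal sketch of that linking workflow, assuming `/mnt/bigdisk` is the larger disk (the path, worker count, and the exact subfolder the converter creates under `--out-dir` are assumptions here):

```shell
# Convert onto a larger disk (the /mnt/bigdisk path is illustrative).
mkdir -p /mnt/bigdisk/waymo
TF_CPP_MIN_LOG_LEVEL=3 python tools/create_data.py waymo --root-path ./data/waymo \
    --out-dir /mnt/bigdisk/waymo --workers 8 --extra-tag waymo --version v1.4
# Link the converted data back to the location the configs expect.
ln -s /mnt/bigdisk/waymo/kitti_format ./data/waymo/kitti_format
```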
Tips:

- **Ready-made Annotations.** We have provided the annotation files generated offline here. However, the original Waymo data still needs to be converted to `kitti-format` data by yourself.
- **Waymo-mini.** If you just want to use a part of the Waymo dataset to verify some methods or debug quickly, you could use our provided Waymo-mini, which only contains two segments in the train split and one segment in the val split from the original dataset. All the images, point clouds and annotations in this compressed file have been processed offline, so you can directly download and unzip it to `data/waymo_mini/`:

```shell
tar -xzvf waymo_mini.tar.gz -C ./data/waymo_mini
```
### NuScenes
- Download nuScenes V1.0 full dataset data HERE. Alternatively, you can download the dataset from OpenDataLab using MIM. The downloading and unzipping command scripts are the following:
```shell
# install OpenDataLab CLI tools
pip install -U opendatalab
# log in OpenDataLab. Note that you should register an account on [OpenDataLab](https://opendatalab.com/) beforehand.
pip install odl
odl login
# download and preprocess by MIM
mim download mmdet3d --dataset nuscenes
```
- Prepare nuscenes data by running:
```shell
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```
Tips:

- **Ready-made Annotations.** We have also provided nuScenes data annotation files generated offline here. You could download them and place them under `data/nuscenes/`. However, if you want to use the `ObjectSample` augmentation in LiDAR-based detection methods, you should additionally generate the ground-truth database files and annotations:

```shell
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --only-gt-database
```
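After generation, you can sanity-check an info file from the shell; a minimal sketch, noting that the top-level structure of the pkl varies across mmdetection3d versions, so the printed keys are not guaranteed:

```shell
python -c "
import pickle
with open('./data/nuscenes/nuscenes_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)
# Newer info files are dicts (e.g. with metainfo/data_list); older ones are lists.
print(type(infos), list(infos) if isinstance(infos, dict) else len(infos))
"
```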
### Lyft
Download Lyft 3D detection data HERE. Prepare Lyft data by running:
```shell
python tools/create_data.py lyft --root-path ./data/lyft --out-dir ./data/lyft --extra-tag lyft --version v1.01
python tools/dataset_converters/lyft_data_fixer.py --version v1.01 --root-folder ./data/lyft
```
Note that we follow the original folder names for clear organization. Please rename the raw folders as shown above. Also note that the second command serves the purpose of fixing a corrupted lidar data file. Please refer to the discussion for more details.
### SemanticKITTI
- Download SemanticKITTI dataset HERE and unzip all zip files. Alternatively, you can download the dataset from OpenDataLab using MIM. The downloading and unzipping command scripts are the following:
```shell
# install OpenDataLab CLI tools
pip install -U opendatalab
# log in OpenDataLab. Note that you should register an account on [OpenDataLab](https://opendatalab.com/) beforehand.
pip install odl
odl login
# download and preprocess by MIM
mim download mmdet3d --dataset semantickitti
```
- Generate info files by running:
```shell
python ./tools/create_data.py semantickitti --root-path ./data/semantickitti --out-dir ./data/semantickitti --extra-tag semantickitti
```
Tips:

- **Ready-made Annotations.** We have also provided SemanticKITTI data annotation files generated offline here. You could download them and place them under `data/semantickitti/`.
### S3DIS, ScanNet and SUN RGB-D
To prepare S3DIS data, please see its README.
To prepare ScanNet data, please see its README.
To prepare SUN RGB-D data, please see its README.
Tips: For the S3DIS, ScanNet and SUN RGB-D datasets, we have also provided data annotation files generated offline here. You could download them and place them under `data/${DATASET}/`. However, you still need to generate the point cloud files and the semantic/instance mask files (if the dataset has them) by yourself.
## Customized Datasets
For using custom datasets, please refer to Customize Datasets.
## Update data infos
If you have used mmdetection3d v1.0.0rc1-v1.0.0rc4 to create data infos before, and now want to use the newest v1.1.0, you need to update the data info files:
```shell
python tools/dataset_converters/update_infos_to_v2.py --dataset ${DATA_SET} --pkl-path ${PKL_PATH} --out-dir ${OUT_DIR}
```
- `--dataset`: name of the dataset.
- `--pkl-path`: path of the data info pkl file to be updated.
- `--out-dir`: output directory of the updated data info pkl file.
Example:
```shell
python tools/dataset_converters/update_infos_to_v2.py --dataset kitti --pkl-path ./data/kitti/kitti_infos_trainval.pkl --out-dir ./data/kitti
```
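If you have several info files to migrate, a small shell loop works; a sketch assuming the KITTI files live under `./data/kitti/` (adjust the split names to the files you actually generated):

```shell
# Update each KITTI info file with the same converter.
for split in train val trainval test; do
    python tools/dataset_converters/update_infos_to_v2.py \
        --dataset kitti \
        --pkl-path ./data/kitti/kitti_infos_${split}.pkl \
        --out-dir ./data/kitti
done
```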
## Summary of annotation files
We provide ready-made annotation files generated offline for reference. You can directly use these files for convenience.
| Dataset | Train annotation file | Val annotation file | Test information file |
| --- | --- | --- | --- |
| KITTI | kitti_infos_train.pkl | kitti_infos_val.pkl | kitti_infos_test.pkl |
| NuScenes | nuscenes_infos_train.pkl, nuscenes_mini_infos_train.pkl | nuscenes_infos_val.pkl, nuscenes_mini_infos_val.pkl | |
| Waymo | waymo_infos_train.pkl | waymo_infos_val.pkl | waymo_infos_test.pkl, waymo_infos_test_cam_only.pkl |
| Waymo-mini | | | |
| SUN RGB-D | sunrgbd_infos_train.pkl | sunrgbd_infos_val.pkl | |
| ScanNet | scannet_infos_train.pkl | scannet_infos_val.pkl | scannet_infos_test.pkl |
| SemanticKITTI | semantickitti_infos_train.pkl | semantickitti_infos_val.pkl | semantickitti_infos_test.pkl |