LOTEAT committed on
Commit 93c6961 · 1 Parent(s): 5868c0d

Upload dataset (auto)

Files changed (1):
  1. README.md +15 -4
README.md CHANGED
@@ -1,7 +1,18 @@
-<!--
- * @Author: LOTEAT
- * @Date: 2025-11-18 11:15:49
- -->
+---
+name: VVSim
+tags:
+- aerial-ground cooperative perception
+- autonomous driving
+- trajectory prediction
+license: CC-BY-4.0
+task_categories:
+- perception
+- planning
+- control
+task_ids:
+- object detection
+---
+
 # VVSim Dataset
 
 **VVSim** is a large-scale dataset created for aerial–ground cooperative perception (AGCP). It integrates synchronized multimodal sensing data and state information collected simultaneously from vehicles and UAVs. The dataset contains **61K** fully annotated frames that cover **19** interaction scenarios (e.g., cut-in and lane change), along with **5** weather conditions (e.g., sunny, foggy, rainy, cloudy, snowy) and **11** scene types such as city, town, university, highway, and mountain environments. Beyond these frames, VVSim provides **255K** LiDAR sweeps and **3.5M** images (e.g., **1.2M** RGB images, **1.2M** semantic segmentation images, and **1.1M** depth images), accompanied by detailed annotations for 2D and 3D bounding boxes, object trajectories, and agent states.
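The change in this commit is the `---`-delimited YAML front matter prepended to the README, which dataset-hub tooling reads as machine-parseable metadata. A minimal, stdlib-only sketch of splitting such a README into front matter and body; the helper name `split_front_matter` and the shortened sample text are illustrative, not part of the commit:

```python
import re

# Shortened sample in the same shape as the README after this commit.
README = """\
---
name: VVSim
license: CC-BY-4.0
---

# VVSim Dataset
"""

def split_front_matter(text):
    """Split a README into (front_matter, body) strings.

    Returns ("", text) when no leading ---...--- block is present.
    """
    m = re.match(r"---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    if not m:
        return "", text
    return m.group(1), m.group(2)

meta, body = split_front_matter(README)
print("license: CC-BY-4.0" in meta)             # True
print(body.lstrip().startswith("# VVSim Dataset"))  # True
```

In practice the extracted front matter would then be handed to a YAML parser (e.g. PyYAML's `yaml.safe_load`) to obtain the `name`, `tags`, `license`, and task fields as a dictionary.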