---
license: cc-by-nc-3.0
---
## MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding

### Version 2.0 for the WACV 2025 LLVM-AD Challenge

Tencent, University of Illinois at Urbana-Champaign, Purdue University, University of Virginia
### Dataset Structure

```
----data
|----images
| |----FR1
| | |----photo_forward.jpg
| | |----photo_lef_back.jpg
| | |----photo_rig_back.jpg
| | |----point_cloud_bev.jpg
| |----FR2
| | |----photo_forward.jpg
| | |----photo_lef_back.jpg
| | |----photo_rig_back.jpg
| | |----point_cloud_bev.jpg
| ...
|----train_v2.json
|----val_v2.json
|----test_v2.json
```
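The layout above can be traversed with the standard library alone. The helper below is a minimal sketch (the function name `collect_frames` is illustrative, not part of the dataset) that maps each frame directory (`FR1`, `FR2`, ...) to the expected image files found inside it:

```python
import os

# The four per-frame files listed in the dataset structure above.
EXPECTED_FILES = [
    "photo_forward.jpg",
    "photo_lef_back.jpg",
    "photo_rig_back.jpg",
    "point_cloud_bev.jpg",
]

def collect_frames(root="data"):
    """Map each frame directory (FR1, FR2, ...) to the expected files present in it."""
    images_dir = os.path.join(root, "images")
    frames = {}
    for frame in sorted(os.listdir(images_dir)):
        frame_dir = os.path.join(images_dir, frame)
        if not os.path.isdir(frame_dir):
            continue
        frames[frame] = [
            name for name in EXPECTED_FILES
            if os.path.isfile(os.path.join(frame_dir, name))
        ]
    return frames
```

This also makes it easy to detect frames with missing files by comparing each value against `EXPECTED_FILES`.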
### Input

The input data include three camera views of the scene: forward, back-left, and back-right.

![Forward](./data/images/FR1/photo_forward.jpg) ![Left_back](./data/images/FR1/photo_lef_back.jpg)
![Right_back](./data/images/FR1/photo_rig_back.jpg)

They also include a bird's-eye-view (BEV) image projected from the 3D point cloud.

![BEV](./data/images/FR1/point_cloud_bev.jpg)

All data follow the standard HD map production pipeline. Note that participants do not have to use all inputs in the challenge. The HD map annotations will not be released in this version.
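The three split files (`train_v2.json`, `val_v2.json`, `test_v2.json`) are plain JSON. Their per-sample schema is not documented in this card, so the sketch below only parses a file and counts its top-level entries; `load_split` and `num_entries` are illustrative names, not a dataset API:

```python
import json

def load_split(path):
    """Parse one annotation split file, e.g. data/train_v2.json."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def num_entries(split):
    """Top-level entry count, whether the split parses to a list or a dict."""
    return len(split)
```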
### Task

1. What kind of road scene is it in the images? (SCN)
2. What is the point cloud data quality in the current road area of this image? (QLT)
3. Is there any road crossing, intersection, or lane-change zone on the main road? (INT)
4. How many lanes are on the current road? (LAN)*
5. Describe the lane attributes on the current road. (DES)*
6. Describe all aspects of the current driving scene in detail. (CAP)*
7. Identify any unusual objects visible in the image. (OBJ)*
8. Predict the lane change behavior of the ego vehicle. (MOVE)*
9. Predict the speed behavior of the ego vehicle. (SPEED)*

*Questions marked with an asterisk may not occur for all sample cases.
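For scripting convenience, the nine question types can be keyed by the tags given in parentheses above. The mapping below merely restates that list; `TASKS` and `OPTIONAL_TASKS` are illustrative helper names, not part of the dataset:

```python
# Question templates keyed by task tag, as enumerated above.
TASKS = {
    "SCN":   "What kind of road scene is it in the images?",
    "QLT":   "What is the point cloud data quality in the current road area of this image?",
    "INT":   "Is there any road crossing, intersection, or lane-change zone on the main road?",
    "LAN":   "How many lanes are on the current road?",
    "DES":   "Describe the lane attributes on the current road.",
    "CAP":   "Describe all aspects of the current driving scene in detail.",
    "OBJ":   "Identify any unusual objects visible in the image.",
    "MOVE":  "Predict the lane change behavior of the ego vehicle.",
    "SPEED": "Predict the speed behavior of the ego vehicle.",
}

# Tags marked with an asterisk above; these may not occur for all sample cases.
OPTIONAL_TASKS = {"LAN", "DES", "CAP", "OBJ", "MOVE", "SPEED"}
```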
### Reference

When using this resource, please cite:

```
@inproceedings{cao2024maplm,
  title={MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding},
  author={Cao, Xu and Zhou, Tong and Ma, Yunsheng and Ye, Wenqian and Cui, Can and Tang, Kun and Cao, Zhipeng and Liang, Kaizhao and Wang, Ziran and Rehg, James M and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21819--21830},
  year={2024}
}
```

```
@inproceedings{tang2023thma,
  title={THMA: Tencent HD Map AI System for Creating HD Map Annotations},
  author={Tang, Kun and Cao, Xu and Cao, Zhipeng and Zhou, Tong and Li, Erlong and Liu, Ao and Zou, Shengtao and Liu, Chang and Mei, Shuqi and Sizikova, Elena and others},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={37},
  number={13},
  pages={15585--15593},
  year={2023}
}
```