---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- OpenSatMap
- Satellite
task_categories:
- image-segmentation
---
# OpenSatMap Dataset Card
<p align="center">
<img src="image/README/1732438503023.png" alt="1732438503023">
</p>
## Description
The dataset contains 3,787 high-resolution satellite images with fine-grained annotations, covering diverse geographic locations and popular driving datasets. It can be used for large-scale map construction and for downstream tasks such as autonomous driving. The images are collected from Google Maps at level-19 resolution (0.3 m/pixel) and level-20 resolution (0.15 m/pixel); we denote these subsets OpenSatMap19 and OpenSatMap20, respectively. The 1,806 images in OpenSatMap19 are collected from eight cities in China: Beijing, Shanghai, Guangzhou, Shenzhen, Chengdu, Xi'an, Tianjin, and Shenyang. The 1,981 images in OpenSatMap20 are collected from more than 50 cities across 18 countries worldwide. The figure below shows the sampling areas of the images in OpenSatMap.
<p align="center">
<img src="image/README/1732438352223.png" alt="1732438352223">
</p>
For each image, we provide instance-level annotations with eight attributes for road structures, including lane lines, curbs, and virtual lines. The instances in OpenSatMap images are annotated by experts in remote sensing and computer vision. We will continue to update the dataset so that it grows in size and scope to reflect evolving real-world conditions.
## Image Source and Usage License
The OpenSatMap images are collected from Google Maps. The dataset is licensed under a Creative Commons CC BY-NC-SA 4.0 license, and usage of the images must respect the Google Maps Terms of Service.
## Line Category and Attribute
We use vectorized polylines to represent a line instance. All lines fall into three categories: curb, lane line, and virtual line. A curb is the boundary of a road. Lane lines are the visible painted lines that form lanes. A virtual line marks a place where no lane line or curb is present, but where a boundary is logically needed to form a complete lane. Please refer to the figure below for examples of these three categories.
For each line instance, we provide eight attributes: **color, line type, number of lines, function, bidirection, boundary, shaded, clearness**.
Specifically, they are:
- Color: The color of the line. It can be white, yellow, others, or none.
- Line type: The type of the line. It can be solid, thick solid, dashed, short dashed dotted, others, or none.
- Number of lines: The number of parallel lines in the instance. It can be single, double, others, or none.
- Function: The function of the line. It can be Chevron markings, no parking, deceleration line, bus lane, tidal line, parking space, vehicle staging area, guide line, changeable line, lane-borrowing line, others, or none.
- Bidirection: Whether the line is bidirectional. It can be true or false.
- Boundary: Whether the line is a road boundary. It can be true or false.
- Shaded: The degree of occlusion. It can be no, minor, or major.
- Clearness: The clearness of the line. It can be clear or fuzzy.
Note that there are no man-made visible markings on curbs and virtual lines, so we annotate their color, line type, number of lines, and function as none.
<p align="center">
<img src="image/README/1732438442673.png" alt="1732438442673">
</p>
## Annotation Format
The annotations are stored in JSON format. Each image is annotated with "image_width", "image_height", and a list of "lines" where the elements are line instances. Each line is annotated with "category", "points", "color", "line_type", "line_num", "function", "bidirection", "boundary", "shaded", and "clearness".
```
{"img_name": {
    "image_width": int,
    "image_height": int,
    "lines": [
        {
            "category": str,
            "points": [
                [float, float],
                [float, float],
                ...
            ],
            "color": str,
            "line_type": str,
            "line_num": str,
            "function": str,
            "bidirection": bool,
            "boundary": bool,
            "shaded": str,
            "clearness": str
        },
        ...
    ]
}}
```
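As a minimal sketch of working with this format, the snippet below parses an annotation entry that follows the schema above and counts line instances per category. The image name, coordinates, and attribute values are illustrative, not real dataset content.

```python
import json
from collections import Counter

# A minimal annotation entry following the OpenSatMap schema
# (all values are illustrative).
sample = json.loads("""
{"0001.png": {
    "image_width": 4096,
    "image_height": 4096,
    "lines": [
        {"category": "lane line",
         "points": [[10.0, 20.5], [110.0, 22.0], [210.0, 25.5]],
         "color": "white", "line_type": "solid", "line_num": "single",
         "function": "none", "bidirection": false, "boundary": true,
         "shaded": "no", "clearness": "clear"},
        {"category": "curb",
         "points": [[0.0, 0.0], [0.0, 300.0]],
         "color": "none", "line_type": "none", "line_num": "none",
         "function": "none", "bidirection": false, "boundary": true,
         "shaded": "minor", "clearness": "clear"}
    ]
}}
""")

# Count line instances per category across all images in the file.
counts = Counter(
    line["category"]
    for ann in sample.values()
    for line in ann["lines"]
)
print(counts)  # Counter({'lane line': 1, 'curb': 1})
```

The same loop works unchanged on a full annotation file loaded with `json.load`.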
## Meta data
Meta data with GPS coordinates and image acquisition times is also provided, stored in a JSON file. Image names are the keys, and each value lists the tiles used to compose that image.
Please refer to [get_google_maps_image](https://github.com/bjzhb666/get_google_maps_image) for more details.
The meta data can be used to calculate the center of a picture; the corresponding code will be released in [Code (all code will be released as soon as possible)](https://github.com/OpenSatMap/OpenSatMap-offical).
```
{
    "img_name": [
        {
            "centerGPS": [float, float],
            "centerWorld": [float, float],
            "filename": str
        },
        ...
    ],
    ...
}
```
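Until the official code is released, one simple way to estimate an image's center from the meta data above is to average the GPS centers of its tiles. This is a sketch under that assumption; the image name and coordinate values are illustrative.

```python
# Meta data entry following the schema above (values are illustrative).
meta = {
    "0001.png": [
        {"centerGPS": [116.39, 39.90], "centerWorld": [0.1, 0.2], "filename": "tile_0.png"},
        {"centerGPS": [116.41, 39.92], "centerWorld": [0.3, 0.4], "filename": "tile_1.png"},
    ]
}

def image_center_gps(tiles):
    """Estimate an image center as the mean longitude/latitude of its tiles."""
    lons = [t["centerGPS"][0] for t in tiles]
    lats = [t["centerGPS"][1] for t in tiles]
    return sum(lons) / len(lons), sum(lats) / len(lats)

lon, lat = image_center_gps(meta["0001.png"])
print(lon, lat)  # approximately (116.40, 39.91)
```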
## Paper or resources for more information
[Paper](https://arxiv.org/abs/2410.23278),
[Project](https://opensatmap.github.io/),
[Code (all code will be released as soon as possible)](https://github.com/OpenSatMap/OpenSatMap-offical)
## Intended use
### Task 1: Instance-level Line Detection
The aim of this task is to extract road structures from satellite images at the instance level. For each instance, we use polylines as the vectorized representation and pixel-level masks as the rasterized representation.
<p align="center">
<img src="image/README/1732438334686.png" alt="1732438334686">
</p>
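The conversion between the two representations used in this task can be sketched as follows: a vectorized polyline is drawn onto an empty canvas to obtain a rasterized binary mask. This is a minimal illustration using Pillow and NumPy; the line thickness and image size are assumptions, not dataset parameters.

```python
import numpy as np
from PIL import Image, ImageDraw

def polyline_to_mask(points, width, height, thickness=3):
    """Rasterize a vectorized polyline into a binary mask of shape (height, width)."""
    mask = Image.new("L", (width, height), 0)  # empty single-channel canvas
    draw = ImageDraw.Draw(mask)
    draw.line([tuple(p) for p in points], fill=1, width=thickness)
    return np.array(mask, dtype=np.uint8)

# A short polyline in (x, y) pixel coordinates (illustrative values).
points = [[10.0, 20.0], [60.0, 25.0], [120.0, 30.0]]
mask = polyline_to_mask(points, width=128, height=64)
print(mask.shape, int(mask.sum()) > 0)
```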
### Task 2: Satellite-enhanced Online Map Construction
We use satellite images to enhance online map construction for autonomous driving. The inputs are camera images from an autonomous vehicle together with satellite images of the same area; the outputs are vectorized map elements around the vehicle.
<p align="center">
<img src="image/README/1732438311510.png" alt="1732438311510">
</p>
**Alignment with driving benchmark (nuScenes)**
<p align="center">
<img src="image/README/1732438587349.png" alt="1732438587349">
</p>
## Citation
```
@article{zhao2024opensatmap,
title={OpenSatMap: A Fine-grained High-resolution Satellite Dataset for Large-scale Map Construction},
author={Zhao, Hongbo and Fan, Lue and Chen, Yuntao and Wang, Haochen and Jin, Xiaojuan and Zhang, Yixin and Meng, Gaofeng and Zhang, Zhaoxiang},
journal={arXiv preprint arXiv:2410.23278},
year={2024}
}
``` |