---
dataset_info:
  features:
  - name: idx
    dtype: string
  - name: ground_depth
    dtype: image
  - name: ground_semantics
    dtype: image
  - name: ground_rgb
    dtype: image
  - name: satellite_rgb
    dtype: image
  - name: satellite_semantics
    dtype: image
  - name: gt
    dtype: image
  splits:
  - name: all
    num_bytes: 2862184641.896
    num_examples: 4463
  download_size: 2807769507
  dataset_size: 2862184641.896
---
# Geospecific View Generation - Geometry-Context Aware High-resolution Ground View Inference from Satellite Views

[**🌐 Homepage**](https://gdaosu.github.io/geocontext/) |  [**📖 arXiv**](https://arxiv.org/abs/2407.08061)


## Introduction
Predicting realistic ground views from satellite imagery in urban scenes is a challenging task due to the significant view gaps between satellite and ground-view images. We propose a novel pipeline to tackle this challenge by generating geospecific views that maximally respect the weak geometry and texture from multi-view satellite images. Different from existing approaches that hallucinate images from cues such as partial semantics or geometry derived from overhead satellite images, our method directly predicts ground-view images at their geolocation using a comprehensive set of information from the satellite image, resulting in ground-level images with a resolution boost by a factor of ten or more. We leverage a novel building refinement method to reduce geometric distortions in satellite data at ground level, which ensures the creation of accurate conditions for view synthesis using diffusion networks. Moreover, we propose a novel geospecific prior, which prompts the distribution learning of diffusion models to respect image samples that are closer to the geolocation of the predicted images. We demonstrate that our pipeline is the first to generate close-to-real and geospecific ground views merely based on satellite images.

## Description
The GeoContext-v1 dataset contains 4,463 pairs of satellite-ground data (see the loading sketch below), including:
  * <strong>satellite_rgb</strong>: [512x512x3] satellite RGB texture, generated by orthophoto projection from the satellite textured mesh.
  * <strong>satellite_semantics</strong>: [512x512x3] satellite semantics containing two classes, where "ground" is [120,120,70] and "building" is [180,120,120].
  * <strong>ground_rgb</strong>: [256x512x3] ground-view satellite RGB texture, generated by panoramic projection from the satellite textured mesh in Blender.
  * <strong>ground_semantics</strong>: [256x512x3] ground-view satellite semantics containing three classes, where "ground" and "building" use the same colors as "satellite_semantics" and "sky" is [6,230,230].
  * <strong>ground_depth</strong>: [256x512x1] ground-view satellite depth, generated in the same way as "ground_rgb".
  * <strong>gt</strong>: [256x512x3] ground-truth ground-view RGB, downloaded from Google Street View 360 panoramas.
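
Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository id is a placeholder for wherever this dataset is hosted, while the `all` split name and feature names follow the metadata above.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hub path of GeoContext-v1.
ds = load_dataset("your-org/GeoContext-v1", split="all")

print(ds)  # features: idx, ground_depth, ground_semantics, ground_rgb,
           #           satellite_rgb, satellite_semantics, gt

sample = ds[0]
sample["satellite_rgb"].show()  # PIL image, 512x512 orthophoto texture
sample["gt"].show()             # PIL image, 256x512 street-view ground truth
```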

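The semantic images store classes as RGB colors rather than label indices; the sketch below (an assumption-level helper, not part of the official tooling) converts a `ground_semantics` image into a per-pixel class-id mask using the color codes listed above.

```python
import numpy as np

# RGB color codes documented above, mapped to assumed integer class ids.
CLASS_COLORS = {
    0: (120, 120, 70),   # ground
    1: (180, 120, 120),  # building
    2: (6, 230, 230),    # sky (present in ground-view semantics only)
}

def semantics_to_mask(semantic_img):
    """Map an RGB semantics image (PIL.Image) to an [H, W] array of class ids (-1 = unmatched)."""
    rgb = np.asarray(semantic_img.convert("RGB"))
    mask = np.full(rgb.shape[:2], -1, dtype=np.int8)
    for class_id, color in CLASS_COLORS.items():
        mask[np.all(rgb == color, axis=-1)] = class_id
    return mask

# Example: mask = semantics_to_mask(sample["ground_semantics"])
```
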
## Citation

**BibTex:**
```bibtex
@misc{xu2024geospecificviewgeneration,
      title={Geospecific View Generation -- Geometry-Context Aware High-resolution Ground View Inference from Satellite Views}, 
      author={Ningli Xu and Rongjun Qin},
      year={2024},
      eprint={2407.08061},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.08061}, 
}
```