---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 6174743543.802
    num_examples: 1142
  download_size: 6408762030
  dataset_size: 6174743543.802
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# GeoBench (GeoVista Bench)

GeoBench is a collection of real-world panoramas with rich metadata for evaluating geolocation models. Each sample corresponds to one panorama, identified by its `uid`, and includes both the original high-resolution imagery and a lightweight preview for rapid inspection.
## Dataset Structure

- `id`: unique identifier (same as `uid` from the original data).
- `raw_image_path`: relative path (within this repo) to the source panorama under `raw_image/<uid>/`.
- `preview`: compressed JPEG preview (<= 1M pixels) under `preview_image/<uid>/`. This is what the HF Dataset Viewer displays.
- `metadata`: JSON object storing the capture timestamp, location, pano_id, city, and other attributes. Downstream users can parse it to obtain lat/lng, city names, multi-level location tags, etc. (see the parsing sketch after this list).
- `data_type`: string describing the imagery type. If absent in the metadata, it defaults to `panorama`.
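Since `metadata` arrives as a JSON string, a small helper keeps downstream code tidy. The sketch below assumes key names such as `lat`, `lng`, and `city`; those exact names are an assumption, so check them against your copy of the data.

```python
import json

def parse_sample(sample):
    """Decode one sample's metadata into a flat dict.

    Key names ("lat", "lng", "city") are assumptions, not guaranteed
    by the dataset card; adjust them to match the actual JSON schema.
    """
    meta = json.loads(sample["metadata"])
    return {
        "uid": sample["id"],
        "lat_lng": (meta.get("lat"), meta.get("lng")),
        "city": meta.get("city"),
        # data_type falls back to "panorama" when missing (see above)
        "data_type": sample.get("data_type") or "panorama",
    }
```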
All samples are stored in a Hugging Face-compatible parquet file at `data/<split>/data-00000-of-00001.parquet`, with additional metadata in `dataset_info.json`.
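If you prefer to bypass the `datasets` library, the shard can also be read directly. A minimal sketch, assuming `pandas` with `pyarrow` is installed; note that image columns then come back as raw `{bytes, path}` structs rather than decoded PIL images.

```python
import pandas as pd

# Read the train shard directly; image columns arrive as {bytes, path}
# structs rather than decoded PIL images.
df = pd.read_parquet("data/train/data-00000-of-00001.parquet")
print(len(df), df.columns.tolist())
```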
## Working with GeoBench

- Clone/download this folder (or pull it via `huggingface_hub`).
- Load the parquet file using Python:

  ```python
  from datasets import load_dataset

  ds = load_dataset('path/to/this/folder', split='train')
  sample = ds[0]
  ```

  `sample["preview"]` loads directly as a PIL image; `sample["raw_image_path"]` points to the higher-quality file if needed.
- Use the metadata to drive evaluation logic, e.g., compute city-level accuracy, filter by `data_type`, or inspect specific regions (see the sketch after this list).
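As one concrete instance of that evaluation logic, here is a minimal sketch of city-level accuracy. `predict_city` is a hypothetical callable (PIL image -> city name), and the `city` key inside `metadata` is an assumed name, not guaranteed by the card.

```python
import json

def city_accuracy(ds, predict_city):
    """Fraction of samples whose predicted city matches the label.

    `predict_city` is a hypothetical model hook (PIL image -> str);
    the "city" key in metadata is assumed, not guaranteed.
    """
    correct = 0
    for sample in ds:
        label = json.loads(sample["metadata"]).get("city", "")
        pred = predict_city(sample["preview"])
        correct += pred.strip().lower() == label.strip().lower()
    return correct / len(ds)
```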
## Notes

- Raw panoramas retain original filenames to preserve provenance.
- Preview images are resized to reduce storage costs while remaining representative of the scene (a possible resizing recipe is sketched after this list).
- Ensure you comply with the dataset's license (see `dataset_info.json`) when sharing or modifying derived works.
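The <=1M-pixel preview constraint can be reproduced with Pillow. This is a reconstruction under that stated constraint, not the authors' actual preprocessing pipeline; the JPEG quality setting of 85 is an arbitrary choice.

```python
from PIL import Image

MAX_PIXELS = 1_000_000  # matches the <=1M-pixel preview constraint

def make_preview(src_path: str, dst_path: str) -> None:
    """Downscale so width * height <= MAX_PIXELS and save as JPEG."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    if w * h > MAX_PIXELS:
        # uniform scale factor so the resized area fits the pixel budget
        scale = (MAX_PIXELS / (w * h)) ** 0.5
        img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    img.save(dst_path, format="JPEG", quality=85)  # quality is an assumption
```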
## Related Resources

- GeoVista model (RL-trained agentic VLM used in the paper): https://huggingface.co/LibraTree/GeoVista
- GeoVista-Bench (previewable variant): a companion dataset with resized JPEG previews, intended to make image preview easier in the Hugging Face dataset viewer: https://huggingface.co/datasets/LibraTree/GeoVistaBench (same underlying benchmark; different packaging / image formats).
- Paper page on Hugging Face: https://huggingface.co/papers/2511.15705
## Citation

```bibtex
@misc{wang2025geovistawebaugmentedagenticvisual,
  title         = {GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization},
  author        = {Yikun Wang and Zuyan Liu and Ziyi Wang and Pengfei Liu and Han Hu and Yongming Rao},
  year          = {2025},
  eprint        = {2511.15705},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2511.15705},
}
```