---
license: cc-by-4.0
task_categories:
- image-to-image
- image-feature-extraction
tags:
- mars
- crater
- retrieval
- remote-sensing
- planetary-science
- vision-transformer
pretty_name: CraterBench-R
size_categories:
- 10K<n<100K
---
# CraterBench-R
CraterBench-R is an instance-level crater retrieval benchmark built from Mars CTX imagery.
This repository contains the released benchmark images, official split files, relevance metadata, and a minimal evaluation example.
## Summary

- 25,000 crater identities in the benchmark gallery
- 50,000 gallery images with two canonical context crops per crater
- 1,000 held-out query crater identities
- 5,000 manually verified query images with five views per query crater
- official `train` and `test` split manifests with relative paths only
## Download

```bash
# Download the repository
pip install huggingface_hub
huggingface-cli download jfang/CraterBench-R --repo-type dataset --local-dir CraterBench-R
cd CraterBench-R

# Extract images
unzip images.zip
```

After extraction the directory should contain `images/gallery/` (50,000 JPEGs) and `images/query/` (5,000 JPEGs).
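A quick way to confirm the extraction is to count the JPEGs in each directory. This is a minimal sketch, not part of the release; the `count_jpegs` helper name is illustrative, and the expected counts come from this card:

```python
from pathlib import Path

def count_jpegs(root: str) -> dict:
    """Count .jpg files in images/gallery and images/query under root."""
    base = Path(root) / "images"
    return {
        sub: len(list((base / sub).glob("*.jpg"))) if (base / sub).is_dir() else 0
        for sub in ("gallery", "query")
    }

if __name__ == "__main__":
    # After extraction this should report 50,000 gallery and 5,000 query images.
    print(count_jpegs("."))
```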
## Repository Layout

- `images.zip`: all benchmark images (gallery + query) in a single archive
- `splits/test.json`: official benchmark split with the full gallery plus the query set
- `splits/train.json`: train gallery with the full test relevance closure removed
- `metadata/query_relevance.json`: raw co-visible crater IDs and gallery-filtered relevance
- `metadata/stats.json`: release summary statistics
- `metadata/source/retrieval_ground_truth_raw.csv`: raw query relevance CSV for reference
- `examples/eval_timm_global.py`: minimal global-descriptor example for any `timm` image model
- `requirements.txt`: lightweight requirements for the example script
## Split Semantics

`splits/test.json` is the official benchmark split and includes both:
- the full gallery
- the query set evaluated against that gallery

The `ground_truth` field maps each query crater ID to the relevant crater IDs present in the gallery.

The raw query co-visibility information is preserved separately in `metadata/query_relevance.json`:

- `co_visible_ids_all`: all raw co-visible crater IDs from the source annotation
- `relevant_gallery_ids`: the subset that is present in the released gallery

`splits/train.json` is intended for supervised or metric-learning experiments. It excludes the full test relevance closure, not just the 1,000 direct query crater IDs.
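The guarantee that every ground-truth ID resolves to a gallery identity can be sanity-checked in a few lines. This sketch assumes only the manifest fields documented on this card (`gallery_images`, `crater_id`, `ground_truth`); the helper name is illustrative:

```python
import json

def check_ground_truth(split: dict) -> list:
    """Return ground-truth crater IDs that are missing from the gallery."""
    gallery_ids = {img["crater_id"] for img in split["gallery_images"]}
    missing = []
    for query_id, relevant in split["ground_truth"].items():
        missing.extend(cid for cid in relevant if cid not in gallery_ids)
    return missing

# Usage (after extraction):
#   split = json.load(open("splits/test.json"))
#   assert not check_ground_truth(split)
```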
## Official Counts

- test gallery: 25,000 crater IDs / 50,000 images
- test queries: 1,000 crater IDs / 5,000 images
- train gallery: 23,941 crater IDs / 47,882 images
- raw multi-ID query crater IDs: 428
- gallery-present multi-ID query crater IDs: 59
## Quick Start

```bash
pip install -r requirements.txt
unzip images.zip  # if not already extracted
python examples/eval_timm_global.py \
    --data-root . \
    --split test \
    --model vit_small_patch16_224.dino \
    --pool cls \
    --batch-size 64 \
    --device cuda
```
The example script:

- loads `splits/test.json`
- creates a pretrained `timm` model
- extracts one feature vector per image
- performs cosine retrieval against the released gallery
- reports Recall@1/5/10, mAP, and MRR
It is intentionally simple and meant as a working baseline rather than the fastest possible evaluator.
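For reference, the reported metrics can be computed per query from a ranked list of gallery crater IDs. This is an illustrative sketch of the standard metric definitions, not the released evaluator:

```python
def retrieval_metrics(ranked_ids, relevant_ids, ks=(1, 5, 10)):
    """Compute Recall@K, reciprocal rank, and average precision for one query.

    ranked_ids: gallery crater IDs sorted by descending cosine similarity,
                one entry per gallery identity.
    relevant_ids: set of crater IDs relevant to the query.
    """
    hits = [cid in relevant_ids for cid in ranked_ids]
    # Recall@K: 1 if any relevant identity appears in the top K.
    recall = {k: float(any(hits[:k])) for k in ks}
    # Reciprocal rank of the first relevant item (0 if none retrieved).
    rr = 0.0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            rr = 1.0 / rank
            break
    # Average precision over all relevant items.
    num_hits, precisions = 0, []
    for rank, hit in enumerate(hits, start=1):
        if hit:
            num_hits += 1
            precisions.append(num_hits / rank)
    ap = sum(precisions) / max(len(relevant_ids), 1)
    return recall, rr, ap
```

Per-query values are then averaged over all queries to obtain Recall@K, mAP, and MRR.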
## Manifest Format

Each split JSON has the form:

```json
{
  "split_name": "train or test",
  "version": "release_v1",
  "gallery_images": [
    {
      "image_id": "02-1-002611_2x",
      "path": "images/gallery/02-1-002611_2x.jpg",
      "crater_id": "02-1-002611",
      "view_type": "2x"
    }
  ],
  "query_images": [
    {
      "image_id": "02-1-002927_view_1",
      "query_id": "02-1-002927",
      "path": "images/query/02-1-002927_view_1.jpg",
      "crater_id": "02-1-002927",
      "view": 1,
      "manual_verified": true
    }
  ],
  "ground_truth": {
    "02-1-002927": ["02-1-002927"]
  }
}
```
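Given that schema, gallery entries can be grouped by crater identity (each crater has two context crops). A short sketch; the `group_by_crater` helper name is illustrative:

```python
from collections import defaultdict

def group_by_crater(gallery_images):
    """Map each crater_id to the list of its gallery image paths."""
    groups = defaultdict(list)
    for img in gallery_images:
        groups[img["crater_id"]].append(img["path"])
    return dict(groups)

# Usage (after extraction):
#   import json
#   split = json.load(open("splits/test.json"))
#   groups = group_by_crater(split["gallery_images"])
#   # The test gallery should yield 25,000 identities with two paths each.
```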
## Validation

The repository includes one small validation utility:

- `scripts/validate_release.py`: checks split consistency, relative paths, and train/test separation

To validate the package locally:

```bash
python scripts/validate_release.py
```
## Notes

- Image paths in manifests are relative and portable.
- `splits/test.json` is the official benchmark entrypoint.
- `splits/train.json` is intended for training or fine-tuning experiments.