---
license: cc-by-nc-4.0
size_categories:
  - 10K<n<100K
dataset_info:
  features:
    - name: image
      dtype: image
    - name: query
      dtype: string
    - name: relevant
      dtype: int64
    - name: clip_score
      dtype: float64
    - name: inat24_image_id
      dtype: int64
    - name: inat24_file_name
      dtype: string
    - name: supercategory
      dtype: string
    - name: category
      dtype: string
    - name: iconic_group
      dtype: string
    - name: inat24_species_id
      dtype: int64
    - name: inat24_species_name
      dtype: string
    - name: latitude
      dtype: float64
    - name: longitude
      dtype: float64
    - name: location_uncertainty
      dtype: float64
    - name: date
      dtype: string
    - name: license
      dtype: string
    - name: rights_holder
      dtype: string
  splits:
    - name: validation
      num_bytes: 293789663
      num_examples: 4000
    - name: test
      num_bytes: 1694429058
      num_examples: 16000
  download_size: 1879381267
  dataset_size: 1988218721
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# INQUIRE-Rerank

🌐 Website | 📖 Paper | GitHub

🎯 How do we empower scientific discovery in millions of nature photos?

INQUIRE is a text-to-image retrieval benchmark designed to challenge multimodal models with expert-level queries about the natural world. The dataset aims to emulate the real-world image retrieval and analysis problems faced by scientists working with large-scale image collections. We hope that INQUIRE will both encourage and track advances in the real scientific utility of AI systems.


## Dataset Details

INQUIRE-Rerank is built from 250 expert-level queries. The task fixes an initial ranking of 100 images per query, obtained by CLIP ViT-H-14 zero-shot retrieval over the entire 5 million-image iNat24 dataset. The challenge is to rerank all 100 images for each query, assigning high scores to the relevant ones (each query may have many relevant images). This fixed starting point makes reranking evaluation consistent and saves you from running the initial retrieval yourself. If you're interested in full-dataset retrieval, check out INQUIRE-Fullrank, available from the GitHub repo.
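
The schema above is enough to sketch an evaluation loop for this task. The following is a minimal sketch, not the official evaluation code (the paper defines the exact metrics): it groups candidates by query and computes mean average precision, using the provided `clip_score` column as a stand-in for a reranker's output scores.

```python
from collections import defaultdict

from datasets import load_dataset

# Group the validation split by query and score each fixed 100-image candidate
# set. The `clip_score` column stands in for a reranker's output here; plug in
# your own per-image scores to evaluate a real reranking model.
ds = load_dataset("evendrow/INQUIRE-Rerank", split="validation")
ds = ds.remove_columns(["image"])  # skip image decoding; we only need labels

per_query = defaultdict(list)
for row in ds:
    per_query[row["query"]].append((row["clip_score"], row["relevant"]))

def average_precision(scored):
    """AP for one query's candidates, ranked by descending score."""
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    hits, ap = 0, 0.0
    for rank, (_, relevant) in enumerate(ranked, start=1):
        if relevant == 1:
            hits += 1
            ap += hits / rank
    return ap / hits if hits else 0.0

mean_ap = sum(average_precision(v) for v in per_query.values()) / len(per_query)
print(f"mAP of the CLIP zero-shot ranking: {mean_ap:.3f}")
```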

## Loading the Dataset

To load the dataset with Hugging Face `datasets`, first `pip install datasets`, then run the following code:

```python
from datasets import load_dataset

inquire_rerank = load_dataset("evendrow/INQUIRE-Rerank", split="validation")  # or "test"
```
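
Each row pairs one candidate image with one query, following the schema in the metadata above. For example, to peek at a single record:

```python
example = inquire_rerank[0]

print(example["query"])       # natural-language search query
print(example["relevant"])    # 1 if the image is relevant to the query, else 0
print(example["clip_score"])  # CLIP ViT-H-14 score from the initial retrieval
example["image"].show()       # decoded PIL image (opens in your image viewer)
```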

## Running Baselines

We publish code for running reranking baselines with CLIP models and with large vision-language models. The code is available in our repository: https://github.com/inquire-benchmark/INQUIRE.
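
To give a flavor of what CLIP reranking involves, here is a minimal sketch using the Hugging Face `transformers` CLIP classes. It assumes the small `openai/clip-vit-base-patch32` checkpoint as a stand-in; the official baselines in the repository above make their own model and batching choices.

```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

# Stand-in checkpoint for illustration; not the official baseline model.
checkpoint = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(checkpoint).eval()
processor = CLIPProcessor.from_pretrained(checkpoint)

@torch.no_grad()
def rerank_score(query, image):
    """Image-text similarity, used as the reranking score for one candidate."""
    inputs = processor(text=[query], images=image, return_tensors="pt", padding=True)
    return model(**inputs).logits_per_image.item()

ds = load_dataset("evendrow/INQUIRE-Rerank", split="validation")
row = ds[0]
print(rerank_score(row["query"], row["image"]))
```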

## Dataset Sources

INQUIRE and iNat24 were created by a group of researchers from the following affiliations: iNaturalist, the Massachusetts Institute of Technology, University College London, the University of Edinburgh, and the University of Massachusetts Amherst.

- **Queries and Relevance Annotation:** All image annotations were performed by a small set of individuals whose interest in and familiarity with wildlife image collections enabled them to provide accurate labels for challenging queries.
- **Images and Species Labels:** The images and species labels used in INQUIRE were sourced from data made publicly available by the citizen science platform iNaturalist in 2021, 2022, or 2023.

## Licensing Information

We release INQUIRE under the CC BY-NC 4.0 license. We also include each image's license information and rights holder. All images in our dataset have licenses suitable for research use.

## Additional Details

For additional details, check out our paper, *INQUIRE: A Natural World Text-to-Image Retrieval Benchmark*.

## Citation Information

```bibtex
@article{vendrow2024inquire,
  title={INQUIRE: A Natural World Text-to-Image Retrieval Benchmark},
  author={Vendrow, Edward and Pantazis, Omiros and Shepard, Alexander and Brostow, Gabriel and Jones, Kate E and Mac Aodha, Oisin and Beery, Sara and Van Horn, Grant},
  journal={NeurIPS},
  year={2024},
}
```