---
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
---

_🎯 **How do we empower scientific discovery in millions of nature photos?**_

INQUIRE is a text-to-image retrieval benchmark designed to challenge multimodal models with expert-level queries about the natural world. This dataset aims to emulate the real-world image retrieval and analysis problems faced by scientists working with large-scale image collections. We therefore hope that INQUIRE will both encourage and track advances in the real scientific utility of AI systems.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/630b1e44cd26ad7f60d490e2/CIFPqSwwkSSZo0zMoQOCr.jpeg)

**Dataset Details**

The **INQUIRE-Rerank** dataset is built from 250 expert-level queries. The task fixes an initial ranking of 100 images per query, obtained with CLIP ViT-H-14 zero-shot retrieval over the full 5-million-image iNat24 dataset. The challenge is to rerank all 100 images for each query so that relevant images receive high scores (each query may have many relevant images). This fixed starting point makes reranking evaluation consistent and saves you from running the initial retrieval yourself. If you are interested in full-dataset retrieval, check out **INQUIRE-Fullrank**, available from the GitHub repo.

**Loading the Dataset**

To load the dataset with Hugging Face `datasets`, first `pip install datasets`, then run the following code:

```python
from datasets import load_dataset

inquire_rerank = load_dataset("evendrow/INQUIRE-Rerank", split="validation")  # or "test"
```

**Running Baselines**

We publish code for running reranking baselines with CLIP models and with large vision-language models. The code is available in our repository: [https://github.com/inquire-benchmark/INQUIRE](https://github.com/inquire-benchmark/INQUIRE).

**Dataset Sources**

INQUIRE and iNat24 were created by a group of researchers from the following affiliations: iNaturalist, the Massachusetts Institute of Technology, University College London, the University of Edinburgh, and the University of Massachusetts Amherst.

- **Queries and Relevance Annotation**: All image annotations were performed by a small set of individuals whose interest in and familiarity with wildlife image collections enabled them to provide accurate labels for challenging queries.
- **Images and Species Labels**: The images and species labels used in INQUIRE were sourced from data made publicly available by the citizen science platform iNaturalist in 2021, 2022, and 2023.

**Licensing Information**

We release INQUIRE under the **CC BY-NC 4.0** license. We also include each image's license information and rights holder. All images in the dataset carry licenses suitable for research use.

**Additional Details**

For additional details, see our paper, [INQUIRE: A Natural World Text-to-Image Retrieval Benchmark](https://arxiv.org/abs/2411.02537).

**Citation Information**

```
@article{vendrow2024inquire,
  title={INQUIRE: A Natural World Text-to-Image Retrieval Benchmark},
  author={Vendrow, Edward and Pantazis, Omiros and Shepard, Alexander and Brostow, Gabriel and Jones, Kate E and Mac Aodha, Oisin and Beery, Sara and Van Horn, Grant},
  journal={NeurIPS},
  year={2024},
}
```
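
**Example: Scoring and Evaluating a Reranker**

To make the reranking task concrete, below is a minimal sketch that scores each (query, image) pair with an off-the-shelf CLIP model and reports mean average precision over queries. This is not the official baseline code (see the GitHub repo for that); the column names `query`, `image`, and `relevant` are assumptions about the schema, so inspect `inquire_rerank.features` to confirm them before running, and the small ViT-B/32 checkpoint is used here only for brevity.

```python
# A minimal reranking sketch, assuming columns named "query", "image", and
# "relevant" -- check inquire_rerank.features for the actual schema.
import numpy as np
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

ds = load_dataset("evendrow/INQUIRE-Rerank", split="validation")

def average_precision(labels, scores):
    """Average precision for one query, given binary labels and model scores."""
    order = np.argsort(-np.asarray(scores))      # sort images by descending score
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                     # relevant images found up to rank k
    precision_at_k = hits / (np.arange(len(labels)) + 1)
    return (precision_at_k * labels).sum() / max(labels.sum(), 1)

# Score each (query, image) pair with CLIP, grouping results by query.
scores_by_query, labels_by_query = {}, {}
for example in ds:
    inputs = processor(text=[example["query"]], images=example["image"],
                       return_tensors="pt", padding=True, truncation=True)
    score = model(**inputs).logits_per_image.item()
    scores_by_query.setdefault(example["query"], []).append(score)
    labels_by_query.setdefault(example["query"], []).append(int(example["relevant"]))

mean_ap = np.mean([average_precision(labels_by_query[q], scores_by_query[q])
                   for q in scores_by_query])
print(f"mAP over {len(scores_by_query)} queries: {mean_ap:.3f}")
```

For clarity this loop scores one pair at a time and re-encodes the query text for every image; in practice you would batch the 100 images per query and encode each query once, which is how the baseline code in the repository is structured.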