---
dataset_info:
  features:
    - name: query
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: test
      num_bytes: 744837
      num_examples: 3
  download_size: 728880
  dataset_size: 744837
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: mit
language:
  - en
pretty_name: Document Visual Retrieval Test (internal)
---

# Dataset Card: Document Visual Retrieval Test (internal)

## Dataset Overview

This dataset evaluates visual retrievers by testing their ability to match a text query to the relevant image. Each of the three examples contains a text query and an associated image, a scanned page from the foundational "Attention Is All You Need" paper. A retrieval model evaluated on this dataset should link each query to its corresponding page.

Copied from [vidore/document-visual-retrieval-test](https://huggingface.co/datasets/vidore/document-visual-retrieval-test/tree/main/data).

## Dataset Details

- **Number of examples**: 3
- **Image type**: Scanned pages from the "Attention Is All You Need" paper
- **Purpose**: Testing the retrieval accuracy of visual retrievers on academic paper pages
- **Usage**: Suited to testing retrieval models, especially cross-modal retrievers that match a text query to a specific visual page

## Intended Use

This dataset is intended for assessing and benchmarking visual retrieval models. Specifically, a high-performing model should be able to:

- Understand the textual context provided in the query.
- Retrieve, from a set of candidate images, the image that corresponds to that query.

### Example Queries

The queries reflect key sections of the "Attention Is All You Need" paper and require the retriever to connect each query to the page image containing the relevant information.

## Performance Evaluation

To assess a visual retriever with this dataset, standard metrics such as nDCG@k (Normalized Discounted Cumulative Gain), Recall@k, and MRR (Mean Reciprocal Rank) are recommended. The dataset is small and intended as a preliminary benchmark to check whether a retriever can reliably match highly specific text queries to their associated visual representations.

### Baseline Performance

Given the straightforward nature of the task and the limited dataset size, a basic text-to-image matching model should achieve a Recall@1 score of 100%.

## Ethical Considerations

This dataset uses publicly available content from an academic paper ("Attention Is All You Need"). Users should ensure appropriate use in line with fair-use guidelines for academic and research purposes. No private or sensitive information is contained in this dataset.
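
## Example Usage

The snippet below is a minimal sketch of how the test split could be loaded and inspected with the Hugging Face `datasets` library. The repository id shown is a placeholder, since this card describes an internal copy; substitute the actual location of the dataset.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual location of this internal copy.
dataset = load_dataset("your-org/document-visual-retrieval-test", split="test")

for example in dataset:
    query = example["query"]   # text query (string feature)
    image = example["image"]   # scanned page (decoded as a PIL image)
    print(query, image.size)
```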
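
### Evaluation Sketch

As a rough illustration of the evaluation described above, the sketch below computes Recall@1 and MRR from a query-by-image similarity matrix. The scoring of queries against images is assumed to come from whatever retriever is being tested; the toy `scores` matrix here is invented purely for illustration, and the relevant image for query *i* is assumed to be image *i*.

```python
import numpy as np

def recall_at_1_and_mrr(scores: np.ndarray) -> tuple[float, float]:
    """scores[i, j] is the retriever's similarity between query i and image j.
    The relevant image for query i is assumed to be image i."""
    num_queries = scores.shape[0]
    # Rank images for each query from most to least similar.
    rankings = np.argsort(-scores, axis=1)
    recall_at_1 = float(np.mean(rankings[:, 0] == np.arange(num_queries)))
    # Reciprocal rank of the relevant image for each query.
    ranks = np.array([np.where(rankings[i] == i)[0][0] + 1 for i in range(num_queries)])
    mrr = float(np.mean(1.0 / ranks))
    return recall_at_1, mrr

# Toy similarity matrix with 3 queries and 3 page images, matching this dataset's size.
scores = np.array([[0.9, 0.1, 0.2],
                   [0.3, 0.8, 0.1],
                   [0.2, 0.4, 0.7]])
print(recall_at_1_and_mrr(scores))  # -> (1.0, 1.0)
```

A retriever meeting the baseline described above would reach 1.0 on both metrics for these three examples.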