yonigozlan (HF staff) committed
Commit b200672 (1 parent: c845f2c)

add dataset

Files changed (2):
  1. README.md +60 -3
  2. data/test-00000-of-00001.parquet +3 -0
README.md CHANGED
@@ -1,3 +1,60 @@
- ---
- license: apache-2.0
- ---
+ ---
+ dataset_info:
+   features:
+   - name: query
+     dtype: string
+   - name: image
+     dtype: image
+   splits:
+   - name: test
+     num_bytes: 744837
+     num_examples: 3
+   download_size: 728880
+   dataset_size: 744837
+ configs:
+ - config_name: default
+   data_files:
+   - split: test
+     path: data/test-*
+ license: mit
+ language:
+ - en
+ pretty_name: Document Visual Retrieval Test (internal)
+ ---
+ 
+ # Dataset Card: Document Visual Retrieval Test (internal)
+ 
+ ## Dataset Overview
+ 
+ This dataset is designed to evaluate the performance of visual retrievers by testing their ability to match a query to a relevant image. Each of the three examples contains a text query and an associated image, a scanned page from the foundational "Attention Is All You Need" paper. The retrieval model under evaluation should accurately link each query with its corresponding page.
+ 
+ Copied from [vidore/document-visual-retrieval-test](https://huggingface.co/datasets/vidore/document-visual-retrieval-test/tree/main/data).
+ 
+ ## Dataset Details
+ 
+ - **Number of Examples**: 3
+ - **Image Type**: Scanned pages from the "Attention Is All You Need" paper
+ - **Purpose**: Testing the retrieval accuracy of visual retrievers on academic paper pages
+ - **Usage**: The dataset is ideal for testing retrieval models, especially cross-modal retrievers that match a text query to a specific visual page.
+ 
+ ## Intended Use
+ 
+ This dataset is intended for assessing and benchmarking the performance of visual retrieval models. Specifically, a high-performing model should be able to:
+ - Understand the textual context provided in the query.
+ - Retrieve, from a set of candidate images, the correct image for that specific query.
+ 
+ ### Example Queries
+ 
+ The queries reflect key sections of the "Attention Is All You Need" paper and require the retriever to connect each query to the page image containing the relevant information.
+ 
+ ## Performance Evaluation
+ 
+ To assess a visual retriever on this dataset, standard metrics such as nDCG@k (normalized Discounted Cumulative Gain), Recall@k, and MRR (Mean Reciprocal Rank) are recommended. The dataset is small and meant as a preliminary benchmark to check whether a retriever can reliably match highly specific text queries to their associated visual representations.
+ 
+ ### Baseline Performance
+ 
+ Given the straightforward nature of the task and the limited dataset size, a basic text-to-image matching model should aim for a Recall@1 score of 100%.
+ 
+ ## Ethical Considerations
+ 
+ This dataset uses publicly available content from an academic paper ("Attention Is All You Need"). Users should ensure appropriate use in line with fair-use guidelines for academic and research purposes. The dataset contains no private or sensitive information.
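
The Recall@1 and MRR evaluation described in the card can be sketched as follows. The `score(query, image_index)` similarity function is a hypothetical stand-in (any real retriever would supply its own); a toy scorer is used here so the snippet is self-contained, and each query's relevant image is assumed to share its index, matching this three-example set.

```python
# Sketch: evaluate Recall@1 and MRR for a retriever on a query->page test set.
# Assumption: query i's relevant image is image i (as in this 3-example set).
def evaluate_retrieval(queries, n_images, score):
    """Return (Recall@1, MRR) given a similarity function score(query, image_idx)."""
    recall_at_1 = 0
    mrr = 0.0
    for i, q in enumerate(queries):
        # Rank all candidate images by descending similarity to the query.
        ranked = sorted(range(n_images), key=lambda j: score(q, j), reverse=True)
        rank = ranked.index(i) + 1  # 1-based rank of the correct page
        recall_at_1 += (rank == 1)
        mrr += 1.0 / rank
    n = len(queries)
    return recall_at_1 / n, mrr / n

# Toy scorer that always ranks the matching page highest (a "perfect" retriever).
perfect = lambda q, j: 1.0 if q == j else 0.0
r1, mrr = evaluate_retrieval(queries=[0, 1, 2], n_images=3, score=perfect)
# A perfect retriever reaches r1 == 1.0 and mrr == 1.0, the 100% Recall@1
# baseline the card suggests aiming for.
```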
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a9c63a7ba2ab2e94227c185e632789319c32400e08b1d7d6f13ebe63459bc1e
+ size 728880
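
The file added above is a Git LFS pointer, not the parquet data itself: it records the object's content hash and byte size in the `git-lfs.github.com/spec/v1` key/value format. A minimal sketch of reading such a pointer (the helper name `parse_lfs_pointer` is illustrative, not part of any library):

```python
# Sketch: parse a Git LFS pointer file like the one added in this commit.
# Each line is "<key> <value>"; the pointer stores the object's sha256 and size.
def parse_lfs_pointer(text):
    """Return the pointer's key/value fields as a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:4a9c63a7ba2ab2e94227c185e632789319c32400e08b1d7d6f13ebe63459bc1e\n"
    "size 728880\n"
)
fields = parse_lfs_pointer(pointer)
# The declared size matches the download_size reported in the dataset card.
assert fields["size"] == "728880"
assert fields["oid"].startswith("sha256:")
```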