---
dataset_info:
  features:
  - name: wikicaps_id
    dtype: int64
  - name: wikimedia_file
    dtype: string
  - name: caption
    dtype: string
  - name: tokens
    sequence: string
  - name: num_tok
    dtype: int64
  - name: sentence_spans
    sequence: string
  - name: sentence_languages
    sequence: string
  - name: num_sent
    dtype: int64
  - name: min_sent_len
    dtype: int64
  - name: max_sent_len
    dtype: int64
  - name: num_ne
    dtype: int64
  - name: ne_types
    sequence: string
  - name: ne_texts
    sequence: string
  - name: num_nouns
    dtype: int64
  - name: num_propn
    dtype: int64
  - name: num_conj
    dtype: int64
  - name: num_verb
    dtype: int64
  - name: num_sym
    dtype: int64
  - name: num_num
    dtype: int64
  - name: num_adp
    dtype: int64
  - name: num_adj
    dtype: int64
  - name: ratio_ne_tok
    dtype: float64
  - name: ratio_noun_tok
    dtype: float64
  - name: ratio_propn_tok
    dtype: float64
  - name: ratio_all_noun_tok
    dtype: float64
  - name: image_path
    dtype: string
  splits:
  - name: train
    num_bytes: 398344229
    num_examples: 295886
  - name: test
    num_bytes: 6727191
    num_examples: 5000
  download_size: 183918204
  dataset_size: 405071420
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
language:
- en
pretty_name: WISMIR 3
size_categories:
- 100K<n<1M
---
# WISMIR3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches

This repository holds the WISMIR3 dataset. For more information, please refer to the paper:

```bibtex
@inproceedings{schneider2024wismir,
  title     = {{WISMIR}3: A Multi-Modal Dataset to Challenge Text-Image Retrieval Approaches},
  author    = {Florian Schneider and Chris Biemann},
  booktitle = {3rd Workshop on Advances in Language and Vision Research (ALVR)},
  year      = {2024},
  url       = {https://openreview.net/forum?id=Q93yqpfECQ}
}
```
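
## Usage

The dataset ships as Parquet files with a `train` and a `test` split (see the `configs` section of the metadata above), so it can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub; the repo ID below is a placeholder and must be replaced with the actual path of this repository:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hub path of this repository.
ds = load_dataset("<user>/wismir3")

print(ds)  # DatasetDict with a "train" (295,886 rows) and a "test" (5,000 rows) split

example = ds["train"][0]
print(example["caption"])     # the image caption
print(example["image_path"])  # local path to the (downloaded) image
```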

## Columns

| ColumnId           | Description                                                              | Datatype  |
|--------------------|--------------------------------------------------------------------------|-----------|
| wikicaps_id        | ID (line number) of the row in the original WikiCaps dataset __img_en__  | int       |
| wikimedia_file     | Wikimedia file ID of the image associated with the caption               | str       |
| caption            | Caption of the image                                                     | str       |
| image_path         | Local path to the (downloaded) image                                     | str       |
| num_tok            | Number of tokens in the caption                                          | int       |
| num_sent           | Number of sentences in the caption                                       | int       |
| min_sent_len       | Minimum number of tokens in the sentences of the caption                 | int       |
| max_sent_len       | Maximum number of tokens in the sentences of the caption                 | int       |
| num_ne             | Number of Named Entities in the caption                                  | int       |
| num_nouns          | Number of tokens with NOUN POS tag                                       | int       |
| num_propn          | Number of tokens with PROPN POS tag                                      | int       |
| num_conj           | Number of tokens with CONJ POS tag                                       | int       |
| num_verb           | Number of tokens with VERB POS tag                                       | int       |
| num_sym            | Number of tokens with SYM POS tag                                        | int       |
| num_num            | Number of tokens with NUM POS tag                                        | int       |
| num_adp            | Number of tokens with ADP POS tag                                        | int       |
| num_adj            | Number of tokens with ADJ POS tag                                        | int       |
| ratio_ne_tok       | Ratio of tokens associated with Named Entities vs. all tokens            | float     |
| ratio_noun_tok     | Ratio of tokens tagged as NOUN vs. all tokens                            | float     |
| ratio_propn_tok    | Ratio of tokens tagged as PROPN vs. all tokens                           | float     |
| ratio_all_noun_tok | Ratio of tokens tagged as PROPN or NOUN vs. all tokens                   | float     |
| fk_re_score        | Flesch-Kincaid Reading Ease score of the caption ***                     | float     |
| fk_gl_score        | Flesch-Kincaid Grade Level score of the caption ***                      | float     |
| dc_score           | Dale-Chall score of the caption ***                                      | float     |
| ne_texts           | Surface forms of the detected Named Entities                             | List[str] |
| ne_types           | Types of the detected Named Entities (PER, LOC, GPE, etc.)               | List[str] |
*** See [https://en.wikipedia.org/wiki/List_of_readability_tests_and_formulas](https://en.wikipedia.org/wiki/List_of_readability_tests_and_formulas) for more information about readability scores.
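
The per-caption statistics above can be used to select subsets with particular linguistic properties, e.g. long, entity-rich captions. A minimal sketch using the `datasets` filter API; the repo ID is again a placeholder and the thresholds are arbitrary examples, not values used in the paper:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hub path of this repository.
train = load_dataset("<user>/wismir3", split="train")

# Keep longer, entity-rich captions (thresholds are illustrative only).
subset = train.filter(
    lambda ex: ex["num_tok"] >= 30
    and ex["num_sent"] >= 2
    and ex["ratio_ne_tok"] > 0.2
)

print(f"{len(subset)} of {len(train)} captions match the criteria")
```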

## WikiCaps publication
WISMIR3 is based on the WikiCaps dataset. For more information about WikiCaps, see [https://www.cl.uni-heidelberg.de/statnlpgroup/wikicaps/](https://www.cl.uni-heidelberg.de/statnlpgroup/wikicaps/).

```bibtex
@inproceedings{schamoni-etal-2018-dataset,
    title = "A Dataset and Reranking Method for Multimodal {MT} of User-Generated Image Captions",
    author = "Schamoni, Shigehiko  and
      Hitschler, Julian  and
      Riezler, Stefan",
    editor = "Cherry, Colin  and
      Neubig, Graham",
    booktitle = "Proceedings of the 13th Conference of the Association for Machine Translation in the {A}mericas (Volume 1: Research Track)",
    month = mar,
    year = "2018",
    address = "Boston, MA",
    publisher = "Association for Machine Translation in the Americas",
    url = "https://aclanthology.org/W18-1814",
    pages = "140--153",
}
```