---
annotations_creators:
- expert-generated
- crowdsourced
license: cc-by-4.0
task_categories:
- image-to-text
- text-to-image
- object-detection
language:
- en
size_categories:
- 1K<n<10K
tags:
- iiw
- imageinwords
- image-descriptions
- image-captions
- detailed-descriptions
- hyper-detailed-descriptions
- object-descriptions
- object-detection
- object-labels
- image-text
- t2i
- i2t
- dataset
pretty_name: ImageInWords
multilinguality:
- monolingual
---

<h2>ImageInWords: Unlocking Hyper-Detailed Image Descriptions</h2> 

Please visit the [webpage](https://google.github.io/imageinwords) for all the information about the IIW project, data downloads, visualizations, and much more.

<img src="https://github.com/google/imageinwords/blob/main/static/images/Abstract/1_white_background.png?raw=true">
<img src="https://github.com/google/imageinwords/blob/main/static/images/Abstract/2_white_background.png?raw=true">

Please reach out to iiw-dataset@google.com for thoughts/feedback/questions/collaborations.

<h3>&#129303;Hugging Face&#129303;</h3>

<li><a href="https://huggingface.co/datasets/google/imageinwords">IIW-Benchmark Eval Dataset</a></li>

```python
from datasets import load_dataset

# `name` can be one of: IIW-400, DCI_Test, DOCCI_Test, CM_3600, LocNar_Eval
# refer: https://github.com/google/imageinwords/tree/main/datasets
dataset = load_dataset('google/imageinwords', token=None, name="IIW-400", trust_remote_code=True)
```

<li><a href="https://huggingface.co/spaces/google/imageinwords-explorer">Dataset-Explorer</a></li>


## Dataset Description

- **Paper:** [arXiv](https://arxiv.org/abs/2405.02793)
- **Homepage:** https://google.github.io/imageinwords/
- **Point of Contact:** iiw-dataset@google.com
- **Dataset Explorer:** [ImageInWords-Explorer](https://huggingface.co/spaces/google/imageinwords-explorer)

### Dataset Summary

ImageInWords (IIW) is a carefully designed human-in-the-loop annotation framework for curating hyper-detailed image descriptions, together with a new dataset produced through this process.
We validate the framework through evaluations focused on the quality of the dataset and its utility for fine-tuning, with considerations for readability, comprehensiveness, specificity, hallucinations, and human-likeness.

This Data Card describes **IIW-Benchmark: Eval Datasets**, a mixture of human-annotated and machine-generated data intended to help create and capture rich, hyper-detailed image descriptions. 

The IIW dataset has two parts: human annotations and model outputs. The main purposes of this dataset are:
  1) to provide samples of state-of-the-art (SoTA) human-authored outputs to promote discussion on annotation guidelines and further improve their quality;
  2) to provide human side-by-side (SxS) results and model outputs to promote the development of automatic metrics that mimic human SxS judgements.

### Supported Tasks

Text-to-Image, Image-to-Text, Object Detection

### Languages

English

## Dataset Structure

### Data Instances

### Data Fields

For details on the datasets and output keys, please refer to our [GitHub data](https://github.com/google/imageinwords/tree/main/datasets) page inside the individual folders.

IIW-400:
 - `image/key`
 - `image/url`
 - `IIW`: Human-generated image description
 - `IIW-P5B`: Machine-generated image description
 - `iiw-human-sxs-gpt4v` and `iiw-human-sxs-iiw-p5b`: human SxS metrics
   - `metrics/Comprehensiveness`
   - `metrics/Specificity`
   - `metrics/Hallucination`
   - `metrics/First few line(s) as tldr`
   - `metrics/Human Like`

DCI_Test: 
 - `image`
 - `image/url`
 - `ex_id`
 - `IIW`: Human-authored image description
 - `metrics/Comprehensiveness`
 - `metrics/Specificity`
 - `metrics/Hallucination`
 - `metrics/First few line(s) as tldr`
 - `metrics/Human Like`

DOCCI_Test:
 - `image`
 - `image/thumbnail_url`
 - `IIW`: Human-generated image description
 - `DOCCI`: Image description from DOCCI
 - `metrics/Comprehensiveness`
 - `metrics/Specificity`
 - `metrics/Hallucination`
 - `metrics/First few line(s) as tldr`
 - `metrics/Human Like`

LocNar_Eval:
 - `image/key`
 - `image/url`
 - `IIW-P5B`: Machine-generated image description

CM_3600:
 - `image/key`
 - `image/url`
 - `IIW-P5B`: Machine-generated image description

Please note that all fields are strings.
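
As a rough sketch of how these fields can be accessed (field names are taken from the lists above; the split name `test` is an assumption, not confirmed here):

```python
from datasets import load_dataset

# Load the IIW-400 config; the split name "test" is an assumption.
dataset = load_dataset("google/imageinwords", name="IIW-400", trust_remote_code=True)
example = dataset["test"][0]

# All fields are strings (see the field list above).
print(example["image/key"], example["image/url"])
print(example["IIW"])      # human-generated hyper-detailed description
print(example["IIW-P5B"])  # machine-generated description
```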

### Data Splits

Dataset     | Size
----------- | ---:
IIW-400     |  400
DCI_Test    |  112
DOCCI_Test  |  100
LocNar_Eval | 1000
CM_3600     | 1000
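
A minimal sketch to check these counts locally (config names as listed in the table; it simply iterates over whatever splits `load_dataset` returns for each config):

```python
from datasets import load_dataset

# Print the number of examples per split for every config in the table above.
for name in ["IIW-400", "DCI_Test", "DOCCI_Test", "LocNar_Eval", "CM_3600"]:
    ds = load_dataset("google/imageinwords", name=name, trust_remote_code=True)
    print(name, {split: len(ds[split]) for split in ds})
```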

### Annotations

#### Annotation process

Some text descriptions were written by human annotators, while others were generated by machine models.
All metrics come from human SxS evaluations.

### Personal and Sensitive Information

The images used for the descriptions and the machine-generated text descriptions were checked (by algorithmic methods and manual inspection) for S/PII, pornographic content, and violence, and any found to potentially contain such information were filtered out. 
We asked human annotators to use objective and respectful language in the image descriptions.

### Licensing Information

CC BY 4.0

### Citation Information

```
@misc{garg2024imageinwords,
      title={ImageInWords: Unlocking Hyper-Detailed Image Descriptions}, 
      author={Roopal Garg and Andrea Burns and Burcu Karagol Ayan and Yonatan Bitton and Ceslee Montgomery and Yasumasa Onoe and Andrew Bunner and Ranjay Krishna and Jason Baldridge and Radu Soricut},
      year={2024},
      eprint={2405.02793},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```