burcuka committed
Commit 00da411
1 Parent(s): dd9ec92

Update README.md

Files changed (1)
  1. README.md +107 -0
README.md CHANGED
@@ -52,8 +52,115 @@ dataset = load_dataset("google/imageinwords", token="YOUR_HF_ACCESS_TOKEN", name
  <li><a href="https://huggingface.co/spaces/google/imageinwords-explorer">Dataset-Explorer</a></li>

## Dataset Description

- **Homepage:** https://google.github.io/imageinwords/
- **Point of Contact:** iiw-dataset@google.com

### Dataset Summary

ImageInWords (IIW) is a carefully designed human-in-the-loop annotation framework for curating hyper-detailed image descriptions, together with a new dataset resulting from this process.
We validate the framework through evaluations focused on the quality of the dataset and its utility for fine-tuning, with considerations for readability, comprehensiveness, specificity, hallucinations, and human-likeness.

This Data Card describes a mixture of human-annotated and machine-generated data intended to help create and capture rich, hyper-detailed image descriptions.

The IIW dataset has two parts: human annotations and model outputs. The main purposes of this dataset are:
(1) to provide samples from SoTA human-authored outputs to promote discussion on annotation guidelines and further improve their quality;
(2) to provide human SxS results and model outputs to promote the development of automatic metrics that mimic human SxS judgements.

### Supported Tasks

Text-to-Image, Image-to-Text, Object Detection

### Languages

English

## Dataset Structure

### Data Instances

### Data Fields

IIW:
- `image/key`
- `image/url`
- `IIW`: Human generated image description
- `IIW-P5B`: Machine generated image description
- `iiw-human-sxs-gpt4v` and `iiw-human-sxs-iiw-p5b`: human SxS metrics
  - `metrics/Comprehensiveness`
  - `metrics/Specificity`
  - `metrics/Hallucination`
  - `metrics/First few line(s) as tldr`
  - `metrics/Human Like`

DCI:
- `image`
- `image/url`
- `ex_id`
- `IIW`: Human generated image description
- `metrics/Comprehensiveness`
- `metrics/Specificity`
- `metrics/Hallucination`
- `metrics/First few line(s) as tldr`
- `metrics/Human Like`

DOCCI:
- `image`
- `image/url`
- `image/thumbnail_url`
- `IIW`: Human generated image description
- `DOCCI`: Image description from DOCCI
- `metrics/Comprehensiveness`
- `metrics/Specificity`
- `metrics/Hallucination`
- `metrics/First few line(s) as tldr`
- `metrics/Human Like`

LocNar:
- `image/key`
- `image/url`
- `IIW-P5B`: Machine generated image description

CM3600:
- `image/key`
- `image/url`
- `IIW-P5B`: Machine generated image description

Please note that all fields are strings.
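
As a minimal sketch, the fields listed above can be read directly once a config is loaded with the Hugging Face `datasets` library; `"IIW"` is used here only as a placeholder for whichever `name` value the loading snippet earlier in this README documents:

```python
from datasets import load_dataset

# "IIW" is a placeholder config name; substitute the `name` value from the
# loading snippet earlier in this README.
dataset = load_dataset("google/imageinwords", token="YOUR_HF_ACCESS_TOKEN", name="IIW")

# Every field is stored as a string (see the note above).
for split_name, split in dataset.items():
    example = split[0]
    print(split_name, list(example.keys()))
    print(example["image/url"])
    print(example["IIW"][:200])  # human-authored hyper-detailed description
```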

### Data Splits

Dataset | Size
---| ---:
IIW | 400
DCI | 112
DOCCI | 100
LocNar | 1000
CM3600 | 1000
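
A rough sketch of checking these sizes, again treating the dataset names in the table as placeholders for the actual config `name` values:

```python
from datasets import load_dataset

# Placeholder config names; substitute the `name` values documented for this dataset.
for config in ["IIW", "DCI", "DOCCI", "LocNar", "CM3600"]:
    dataset = load_dataset("google/imageinwords", token="YOUR_HF_ACCESS_TOKEN", name=config)
    total = sum(len(split) for split in dataset.values())
    print(f"{config}: {total} examples")
```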

### Annotations

#### Annotation process

Some text descriptions were written by human annotators and some were generated by machine models.
The metrics are all from human SxS evaluations.

### Personal and Sensitive Information

The images used for the descriptions and the machine-generated text descriptions were checked (by algorithmic methods and manual inspection) for S/PII, pornographic content, and violence, and any found to contain such content have been filtered out.
We asked human annotators to use objective and respectful language in the image descriptions.

### Licensing Information

CC BY 4.0

### Citation Information

```
@inproceedings{Garg2024IIW,
  author = {Roopal Garg and Andrea Burns and Burcu Karagol Ayan and Yonatan Bitton and Ceslee Montgomery and Yasumasa Onoe and Andrew Bunner and Ranjay Krishna and Jason Baldridge and Radu Soricut},
  title = {{ImageInWords: Unlocking Hyper-Detailed Image Descriptions}},
  booktitle = {arXiv},
  year = {2024}
}
```