---
annotations_creators:
- expert-generated
- crowdsourced
- found
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: newyorker_caption_contest
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- humor
- caption contest
- new yorker
task_categories:
- image-to-text
- multiple-choice
- text-classification
- text-generation
- visual-question-answering
- other
- text2text-generation
task_ids:
- multi-class-classification
- language-modeling
- visual-question-answering
- explanation-generation
---

# Dataset Card for New Yorker Caption Contest Benchmarks

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [capcon.dev](https://www.capcon.dev)
- **Repository:** [https://github.com/jmhessel/caption_contest_corpus](https://github.com/jmhessel/caption_contest_corpus)
- **Paper:** [Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
- **Leaderboard:** No official leaderboard (yet).
- **Point of Contact:** jackh@allenai.org

### Dataset Summary

We challenge AI models to "demonstrate understanding" of the sophisticated multimodal humor of The New Yorker Caption Contest. Concretely, we develop three carefully circumscribed tasks for which it suffices (but is not necessary) to grasp potentially complex and unexpected relationships between image and caption, and similarly complex and unexpected allusions to the wide varieties of human experience.

### Supported Tasks and Leaderboards

Three tasks are supported:

- "Matching": a model must recognize a caption written about a cartoon (vs. options that were not);
- "Quality ranking": a model must evaluate the quality of a caption by scoring it more highly than a lower-quality option from the same contest;
- "Explanation": a model must explain why a given joke is funny.

There are no official leaderboards (yet).

### Languages

English

## Dataset Structure

Here's an example instance from Matching:
```
{'caption_choices': ['Tell me about your childhood very quickly.',
                     "Believe me . . . it's what's UNDER the ground that's "
                     'most interesting.',
                     "Stop me if you've heard this one.",
                     'I have trouble saying no.',
                     'Yes, I see the train but I think we can beat it.'],
 'contest_number': 49,
 'entities': ['https://en.wikipedia.org/wiki/Rule_of_three_(writing)',
              'https://en.wikipedia.org/wiki/Bar_joke',
              'https://en.wikipedia.org/wiki/Religious_institute'],
 'from_description': 'scene: a bar description: Two priests and a rabbi are '
                     'walking into a bar, as the bartender and another patron '
                     'look on. The bartender talks on the phone while looking '
                     'skeptically at the incoming crew. uncanny: The scene '
                     'depicts a very stereotypical "bar joke" that would be '
                     'unlikely to be encountered in real life; the skepticism '
                     'of the bartender suggests that he is aware he is seeing '
                     'this trope, and is explaining it to someone on the '
                     'phone. entities: Rule_of_three_(writing), Bar_joke, '
                     'Religious_institute. choices A: Tell me about your '
                     "childhood very quickly. B: Believe me . . . it's what's "
                     "UNDER the ground that's most interesting. C: Stop me if "
                     "you've heard this one. D: I have trouble saying no. E: "
                     'Yes, I see the train but I think we can beat it.',
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=323x231 at 0x7F34F283E9D0>,
 'image_description': 'Two priests and a rabbi are walking into a bar, as the '
                      'bartender and another patron look on. The bartender '
                      'talks on the phone while looking skeptically at the '
                      'incoming crew.',
 'image_location': 'a bar',
 'image_uncanny_description': 'The scene depicts a very stereotypical "bar '
                              'joke" that would be unlikely to be encountered '
                              'in real life; the skepticism of the bartender '
                              'suggests that he is aware he is seeing this '
                              'trope, and is explaining it to someone on the '
                              'phone.',
 'instance_id': '21125bb8787b4e7e82aa3b0a1cba1571',
 'label': 'C',
 'n_tokens_label': 1,
 'questions': ['What is the bartender saying on the phone in response to the '
               'living, breathing, stereotypical bar joke that is unfolding?']}
```

The label "C" indicates that the third choice in the `caption_choices` list is correct.
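
When using the letter labels programmatically, they can be mapped to zero-based indices into `caption_choices`. A minimal sketch (plain Python; the helper name is ours, for illustration):

```
# Map a letter label (e.g., "C") to the corresponding caption choice.
def label_to_caption(instance):
    idx = ord(instance["label"]) - ord("A")  # "A" -> 0, "B" -> 1, ...
    return instance["caption_choices"][idx]
```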

Here's an example instance from Ranking (in the "from pixels" setting; the task is also available in the "from description" setting):
```
{'caption_choices': ['I guess I misunderstood when you said long bike ride.',
                     'Does your divorce lawyer have any other cool ideas?'],
 'contest_number': 582,
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=600x414 at 0x7F8FF9F96610>,
 'instance_id': 'dd1c214a1ca3404aa4e582c9ce50795a',
 'label': 'A',
 'n_tokens_label': 1,
 'winner_source': 'official_winner'}
```
The label indicates that the first caption choice ("A", here) in the `caption_choices` list was rated more highly.
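
Because each ranking instance pairs exactly two captions, accuracy against the label can be computed directly. A sketch (here, `score_fn` is a hypothetical stand-in for your model, which should assign higher scores to captions it judges funnier; it is not part of the dataset):

```
# Pairwise ranking accuracy; score_fn(image, caption) -> float is a
# hypothetical model-scoring function.
def ranking_accuracy(instances, score_fn):
    correct = 0
    for inst in instances:
        scores = [score_fn(inst["image"], c) for c in inst["caption_choices"]]
        predicted = "AB"[scores.index(max(scores))]  # two choices per instance
        correct += predicted == inst["label"]
    return correct / len(instances)
```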

Here's an example instance from Explanation:
```
{'caption_choices': 'The classics can be so intimidating.',
 'contest_number': 752,
 'entities': ['https://en.wikipedia.org/wiki/Literature',
              'https://en.wikipedia.org/wiki/Solicitor'],
 'from_description': 'scene: a road description: Two people are walking down a '
                     'path. A number of giant books have surrounded them. '
                     'uncanny: There are book people in this world. entities: '
                     'Literature, Solicitor. caption: The classics can be so '
                     'intimidating.',
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=800x706 at 0x7F90003D0BB0>,
 'image_description': 'Two people are walking down a path. A number of giant '
                      'books have surrounded them.',
 'image_location': 'a road',
 'image_uncanny_description': 'There are book people in this world.',
 'instance_id': 'eef9baf450e2fab19b96facc128adf80',
 'label': 'A play on the word intimidating --- usually if the classics (i.e., '
          'classic novels) were to be intimidating, this would mean that they '
          'are intimidating to read due to their length, complexity, etc. But '
          'here, they are surrounded by anthropomorphic books which look '
          'physically intimidating, i.e., they are intimidating because they '
          'may try to beat up these people.',
 'n_tokens_label': 59,
 'questions': ['What do the books want?']}
```
The label is an explanation of the joke, which serves as the autoregressive target.
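
For the "from description" setting, one natural way to form sequence-to-sequence training pairs is to use `from_description` as the input and the gold explanation in `label` as the target. A minimal sketch under that assumption:

```
# Form an (input, target) pair for seq2seq training in the
# from-description explanation setting.
def to_seq2seq_pair(instance):
    return instance["from_description"], instance["label"]
```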

### Data Instances

See above.

### Data Fields

See above.

### Data Splits

Data splits can be accessed as:
```
from datasets import load_dataset
dset = load_dataset("newyorker_caption_contest", "matching")
dset = load_dataset("newyorker_caption_contest", "ranking")
dset = load_dataset("newyorker_caption_contest", "explanation")
```

Or, in the "from pixels" setting, e.g.,
```
dset = load_dataset("newyorker_caption_contest", "ranking_from_pixels")
```

Because the dataset is small, we initially reported results in a 5-fold cross-validation setting. The default splits are split 0. You can access the other splits, e.g.:

```
# cross-validation split 4 of the explanation task
dset = load_dataset("newyorker_caption_contest", "explanation_4")
```
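
To evaluate over all five cross-validation splits, you can loop over the configurations. A sketch, assuming the naming convention above (split 0 is the default, unsuffixed config; splits 1-4 carry a `_<k>` suffix):

```
from datasets import load_dataset

# Load each of the 5 cross-validation splits of the explanation task.
for k in range(5):
    config = "explanation" if k == 0 else f"explanation_{k}"
    dset = load_dataset("newyorker_caption_contest", config)
    print(config, {name: len(split) for name, split in dset.items()})
```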

## Dataset Creation

Full details are in the paper.

### Curation Rationale

See the paper for rationale/motivation.

### Source Data

See citation below. We combined 3 sources of data, and added significant annotations of our own.

#### Initial Data Collection and Normalization

Full details are in the paper.

#### Who are the source language producers?

We paid crowdworkers $15/hr to annotate the corpus. In addition, significant annotation efforts were conducted by the authors of this work.

### Annotations

Full details are in the paper.

#### Annotation process

Full details are in the paper.

#### Who are the annotators?

A mix of crowdworkers and authors of this paper.

### Personal and Sensitive Information

Personal and sensitive information has been redacted from the dataset. The images have already been published in The New Yorker.

## Considerations for Using the Data

### Social Impact of Dataset

It's plausible that humor could perpetuate negative stereotypes. The jokes in this corpus are a mix of highly rated crowdsourced entries and entries published in The New Yorker.

### Discussion of Biases

Humor is subjective, and some of the jokes may be considered offensive. The images may contain adult themes and minor cartoon nudity.

### Other Known Limitations

More details are in the paper.

## Additional Information

### Dataset Curators

The dataset was curated by researchers at AI2.

### Licensing Information

The annotations we provide are released under CC-BY-4.0. See www.capcon.dev for more info.

### Citation Information

```
@article{hessel2022androids,
  title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest},
  author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
  journal={arXiv preprint arXiv:2209.06293},
  year={2022}
}
```

Our data contributions are:

- the cartoon-level annotations;
- the joke explanations;
- the framing of the tasks.

We release the data we contribute under CC-BY-4.0 (see DATASET_LICENSE). If you find this data useful in your work, in addition to citing our contributions, please also cite the following, from which the cartoons/captions in our corpus are derived:

```
@misc{newyorkernextmldataset,
  author={Jain, Lalit and Jamieson, Kevin and Mankoff, Robert and Nowak, Robert and Sievert, Scott},
  title={The {N}ew {Y}orker Cartoon Caption Contest Dataset},
  year={2020},
  url={https://nextml.github.io/caption-contest-data/}
}

@inproceedings{radev-etal-2016-humor,
  title={Humor in Collective Discourse: Unsupervised Funniness Detection in The {New Yorker} Cartoon Caption Contest},
  author={Radev, Dragomir and Stent, Amanda and Tetreault, Joel and Pappu, Aasish and Iliakopoulou, Aikaterini and Chanfreau, Agustin and de Juan, Paloma and Vallmitjana, Jordi and Jaimes, Alejandro and Jha, Rahul and Mankoff, Robert},
  booktitle={LREC},
  year={2016},
}

@inproceedings{shahaf2015inside,
  title={Inside jokes: Identifying humorous cartoon captions},
  author={Shahaf, Dafna and Horvitz, Eric and Mankoff, Robert},
  booktitle={KDD},
  year={2015},
}
```