arxiv:2406.10522

Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning

Published on Jun 15 · Submitted by jifanz on Jun 18
Abstract

We present a novel multimodal preference dataset for creative tasks, consisting of over 250 million human ratings on more than 2.2 million captions, collected by crowdsourcing ratings for The New Yorker's weekly cartoon caption contest over the past eight years. This unique dataset supports the development and evaluation of multimodal large language models and preference-based fine-tuning algorithms for humorous caption generation. We propose novel benchmarks for judging the quality of model-generated captions, utilizing both GPT-4 and human judgments to establish ranking-based evaluation strategies. Our experimental results highlight the limitations of current fine-tuning methods, such as RLHF and DPO, when applied to creative tasks. Furthermore, we demonstrate that even state-of-the-art models like GPT-4 and Claude currently underperform top human contestants in generating humorous captions. As we conclude this extensive data collection effort, we release the entire preference dataset to the research community, fostering further advancements in AI humor generation and evaluation.
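The fine-tuning methods named above, RLHF and DPO, both learn from exactly the kind of pairwise preference signal this dataset provides. As background, here is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) that such experiments optimize; it is illustrative only, not the paper's training code (that lives in the repository linked below), and all tensor names are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of (preferred, rejected) caption pairs.

    Each tensor holds per-sequence log-probabilities, i.e.
    log pi(caption | cartoon) summed over caption tokens; "chosen" is
    the caption crowd raters preferred, "rejected" the one they did not.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Minimized when the policy, relative to the reference model, puts
    # more probability on the preferred caption than on the rejected one.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```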

Community

Paper author · Paper submitter (edited)

Feel free to reach out with any questions!

Paper author · Paper submitter

Creative tasks have no single best answer. How do we improve and evaluate LLMs' creativity? We take a first crack at it through humor and cartoon captioning, releasing a preference dataset with over 250 million human ratings on more than 2.2 million captions for benchmarking and fine-tuning LLMs on humor generation.
Dataset: https://huggingface.co/datasets/yguooo/newyorker_caption_ranking
Codebase: https://github.com/yguooo/cartoon-caption-generation
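
For readers who want to poke at the data, a minimal loading sketch with the Hugging Face `datasets` library follows. The repo id is taken from the dataset link above; the splits and column names are not documented in this post, so the code inspects the schema rather than assuming field names.

```python
from datasets import load_dataset

# Load the released preference dataset (repo id taken from the link above).
# A configuration name may be required; check the dataset card if this
# default call fails.
ds = load_dataset("yguooo/newyorker_caption_ranking")

# The column schema is not documented in this post, so discover it here
# instead of hard-coding assumed field names.
split = next(iter(ds.values()))
print(split.column_names)
print(split[0])  # one record, e.g. a caption with its crowd ratings
```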
