---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: CoCoCON
size_categories:
- 1K<n<10K
tags:
- consistency
- visual-reasoning
task_ids: []
---
# Dataset Card for CoCoCON
## Dataset Description
CoCoCON is a challenging dataset for evaluating cross-task consistency in vision-and-language models. We create contrast sets by modifying COCO test instances for multiple tasks in small but semantically meaningful ways that change the gold label, and we outline metrics for measuring whether a model is consistent by ranking the original and perturbed instances across tasks. We find that state-of-the-art systems suffer from a surprisingly high degree of inconsistent behavior across tasks, especially for more heterogeneous tasks. A simplified sketch of the ranking-based consistency check follows the links below.
- Homepage: https://adymaharana.github.io/cococon/
- Repository: https://github.com/adymaharana/cococon
- Paper: https://arxiv.org/abs/2303.16133
- Point of Contact: adyasha@cs.unc.edu
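As a rough illustration of the ranking-based metric, the sketch below counts an instance as consistent when the model's preference between the original and perturbed versions agrees across all tasks. This is a simplified reading, not the paper's exact formulation; `scores_original` and `scores_perturbed` are hypothetical per-task model scores (e.g., log-likelihoods).

```python
def cross_task_consistent(scores_original: dict, scores_perturbed: dict) -> bool:
    """Simplified consistency check: for each task, the model 'prefers'
    whichever version (original vs. perturbed) it scores higher; the
    instance counts as consistent if that preference agrees across all
    tasks. (Hypothetical sketch; see the paper for the exact metric.)"""
    prefers_original = [
        scores_original[task] > scores_perturbed[task]
        for task in scores_original
    ]
    return all(prefers_original) or not any(prefers_original)

# Example: per-task scores for one instance (hypothetical numbers).
orig = {"captioning": -1.2, "vqa": -0.8, "localization": -2.1}
pert = {"captioning": -3.4, "vqa": -0.5, "localization": -4.0}
print(cross_task_consistent(orig, pert))  # False: the VQA preference disagrees
```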
### Languages
English.
## Dataset Structure
Each sample in this dataset corresponds to a COCO image, a set of ground truth annotations for the image captioning, visual question answering (VQA), and (optionally) localization tasks, and their respective contrast sets. A loading sketch follows the field list below.
### Data Fields
- `caption` (string): ground truth caption.
- `query` (string): VQA question.
- `answer` (string): ground truth VQA answer.
- `question_id` (int64): unordered unique identifier for the sample.
- `image_id` (int64): COCO image id.
- `detection` (string, optional): localization query.
- `boxes` (list, optional): list of ground truth bounding boxes for the localization query.
- `contrast_sets` (list): each entry is a set of perturbed annotations corresponding to the ground truth annotations; perturbed fields are prefixed with `mutex_`.
- `file_name` (string): COCO filename for the image.
- `coco_url` (string): URL for downloading the image from the COCO server.
- `flickr_url` (string): URL for downloading the image from Flickr.
- `height` (int64): height of the image.
- `width` (int64): width of the image.
- `id` (int64): ordered unique identifier for the sample.
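As a quick orientation, the snippet below is a minimal sketch of loading the data and inspecting these fields with the Hugging Face `datasets` library. The Hub repo id is a placeholder, and the exact `mutex_`-prefixed keys inside `contrast_sets` are assumptions for illustration.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual Hub path for this dataset.
dataset = load_dataset("adymaharana/cococon", split="test")

sample = dataset[0]
print(sample["image_id"], sample["file_name"])
print("Caption:", sample["caption"])
print("VQA:", sample["query"], "->", sample["answer"])

# Perturbed annotations live under "contrast_sets"; their fields carry
# the "mutex_" prefix (key names below are assumed for illustration).
for contrast in sample["contrast_sets"]:
    print("Perturbed caption:", contrast.get("mutex_caption"))
```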
### Data Splits
The CoCoCON benchmark is an evaluation-only dataset; the data released here should be considered the test split.
## Dataset Creation
The CoCoCON dataset was created by a combination of machine and expert human annotators, who perturbed ground truth COCO annotations to create contrast sets.
## Considerations for Using the Data
### Licensing Information
CC BY 4.0
### Citation Information
```bibtex
@article{maharana2023cococon,
  author  = {Maharana, Adyasha and Kamath, Amita and Clark, Christopher and Bansal, Mohit and Kembhavi, Aniruddha},
  title   = {Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models},
  journal = {arXiv preprint arXiv:2303.16133},
  year    = {2023},
}
```