---
annotations_creators:
  - expert-generated
language:
  - en
language_creators:
  - found
license:
  - cc-by-4.0
multilinguality:
  - monolingual
paperswithcode_id: winogavil
pretty_name: WinoGAViL
size_categories:
  - 10K<n<100K
source_datasets:
  - original
tags:
  - commonsense-reasoning
  - visual-reasoning
task_categories:
  - token-classification
task_ids: []
extra_gated_prompt: >-
  By clicking on “Access repository” below, you also agree that you are using it
  solely for research purposes. The full license agreement is available in the
  dataset files.
---

# Dataset Card for WinoGAViL

## Dataset Description

WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. The dataset was collected via the WinoGAViL online game, in which players create vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models and find that the collected associations are intuitive for humans (>90% Jaccard index) but challenging for AI models: the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collected from players, indicates that the associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
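Model and human performance are scored with the Jaccard index between the selected images and the gold associations. A minimal sketch (the function name and the example image identifiers are ours, for illustration only):

```python
def jaccard_index(predicted, gold):
    """Intersection-over-union of the predicted and gold sets of selected images."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0  # two empty selections agree perfectly
    return len(predicted & gold) / len(predicted | gold)

# Example: the model picks two of the gold images plus one spurious candidate.
score = jaccard_index({"werewolf", "moon", "dog"}, {"werewolf", "moon", "wolf"})
# -> 2 / 4 = 0.5
```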

### Supported Tasks and Leaderboards

- Leaderboard: https://winogavil.github.io/leaderboard
- Papers with Code: https://paperswithcode.com/dataset/winogavil

### Languages

English.

## Dataset Structure

### Data Fields

- `candidates` (string)
- `cue` (string)
- `associations` (string)
- `score_fool_the_ai` (int64)
- `num_associations` (int64)
- `annotation_index` (int64)
- `num_candidates` (int64)
- `solvers_jaccard_mean` (float64)
- `solvers_jaccard_std` (float64)
- `ID` (int64)
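A sketch of working with a single record. It assumes `candidates` and `associations` are serialized as Python-style list strings; the field values below are invented for illustration and are not taken from the dataset:

```python
import ast

# Hypothetical record using the fields listed above (all values invented).
example = {
    "cue": "full moon",
    "candidates": "['werewolf', 'cat', 'beach', 'lamp', 'wolf']",
    "associations": "['werewolf', 'wolf']",
    "num_candidates": 5,
    "num_associations": 2,
}

# Parse the serialized lists back into Python objects.
candidates = ast.literal_eval(example["candidates"])
gold = set(ast.literal_eval(example["associations"]))

# The count fields should be consistent with the parsed lists.
assert len(candidates) == example["num_candidates"]
assert len(gold) == example["num_associations"]
```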

### Data Splits

- 5 & 6 candidates: with 5 candidates, random chance of success is 38%; with 6 candidates, 34%.
- 10 & 12 candidates: with 10 candidates, random chance of success is 24%; with 12 candidates, 19%.

## Dataset Creation

Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players.

### Annotations

#### Annotation process

We paid Amazon Mechanical Turk Workers to play our game.

## Considerations for Using the Data

All associations were obtained with human annotators.

### Licensing Information

CC BY 4.0

### Citation Information

```bibtex
@article{bitton2022winogavil,
  title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
  author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
  journal={arXiv preprint arXiv:2207.12576},
  year={2022}
}
```