
Dataset Card for Winoground

Dataset Description

Winoground is a novel task and dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly—but crucially, both captions contain a completely identical set of words/morphemes, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance. In our accompanying paper, we probe a diverse range of state-of-the-art vision and language models and find that, surprisingly, none of them do much better than chance. Evidently, these models are not as skilled at visio-linguistic compositional reasoning as we might have hoped. In the paper, we perform an extensive analysis to obtain insights into how future work might try to mitigate these models’ shortcomings. We aim for Winoground to serve as a useful evaluation set for advancing the state of the art and driving further progress in the field.
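The "chance" baseline above follows directly from how the paper's text, image, and group metrics are defined over the four caption-image scores of an example. A quick combinatorial check (this enumeration is our own illustration, not code from the dataset) shows chance is 25% for the text and image scores and about 16.7% for the group score:

```python
from itertools import permutations

# Each Winoground example yields four scores s(c, i) for caption c and image i.
# Per the paper's metrics, a model is correct on:
#   text score:  s(0,0) > s(1,0) and s(1,1) > s(0,1)
#   image score: s(0,0) > s(0,1) and s(1,1) > s(1,0)
#   group score: all four comparisons at once
# Enumerating all orderings of four distinct random scores gives the chance rates.
text = image = group = 0
orders = list(permutations([3, 2, 1, 0]))  # rankings of (s00, s01, s10, s11)
for s00, s01, s10, s11 in orders:
    t = s00 > s10 and s11 > s01
    i = s00 > s01 and s11 > s10
    text += t
    image += i
    group += t and i
print(text / len(orders), image / len(orders), group / len(orders))
# -> 0.25 0.25 0.16666666666666666
```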


The captions and tags are located in data/examples.jsonl and the images are located in data/. You can load the data as follows:

from datasets import load_dataset
examples = load_dataset("facebook/winoground", use_auth_token="<YOUR USER ACCESS TOKEN>")

You can get <YOUR USER ACCESS TOKEN> by following these steps:

  1. log into your Hugging Face account
  2. click on your profile picture
  3. click "Settings"
  4. click "Access Tokens"
  5. generate an access token
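Each line of data/examples.jsonl is one JSON object. A minimal parsing sketch — the sample line and field names (id, caption_0, caption_1, tag) below are illustrative assumptions, so check the actual file for the full schema:

```python
import json

# Hypothetical JSONL line mirroring the format described above; the field
# names and values are assumptions for illustration, not real dataset content.
line = ('{"id": 0, "caption_0": "an old person kisses a young person", '
        '"caption_1": "a young person kisses an old person", "tag": "Relation"}')

example = json.loads(line)
# The two captions use exactly the same words, only in a different order.
assert sorted(example["caption_0"].split()) == sorted(example["caption_1"].split())
print(example["tag"])  # -> Relation
```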

Model Predictions and Statistics

The image-caption model scores from our paper are saved in statistics/model_scores. To compute many of the tables and graphs from our paper, run the following commands:

git clone
cd winoground
pip install -r statistics/requirements.txt
python statistics/
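The per-example text, image, and group metrics from the paper can be computed from such caption-image scores. A minimal sketch — the score dictionary here is made up for illustration, where s[(c, i)] stands for a model's match score between caption c and image i:

```python
# Winoground metrics for a single example, given four caption-image scores.
# The numbers below are made-up illustrations, not real model outputs.
s = {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.8}

def text_correct(s):
    # Each image's own caption must outscore the other caption.
    return s[(0, 0)] > s[(1, 0)] and s[(1, 1)] > s[(0, 1)]

def image_correct(s):
    # Each caption's own image must outscore the other image.
    return s[(0, 0)] > s[(0, 1)] and s[(1, 1)] > s[(1, 0)]

def group_correct(s):
    # The example counts for the group score only if both hold.
    return text_correct(s) and image_correct(s)

print(text_correct(s), image_correct(s), group_correct(s))  # -> True True True
```

Dataset-level scores are then just the fraction of examples for which each predicate holds.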

Check out the Colab notebook code for evaluating CLIP on Winoground.

Citation Information

Tristan Thrush and Candace Ross contributed equally.

@inproceedings{thrush2022winoground,
  author = {Tristan Thrush and Ryan Jiang and Max Bartolo and Amanpreet Singh and Adina Williams and Douwe Kiela and Candace Ross},
  title = {Winoground: Probing vision and language models for visio-linguistic compositionality},
  booktitle = {CVPR},
  year = {2022},
}