---
language:
  - id
  - sw
  - ta
  - tr
  - zh
  - en
license: cc-by-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - visual-question-answering
pretty_name: MaRVL
dataset_info:
  features:
    - name: id
      dtype: string
    - name: hypothesis
      dtype: string
    - name: hypo_en
      dtype: string
    - name: language
      dtype: string
    - name: label
      dtype: bool
    - name: chapter
      dtype: string
    - name: concept
      dtype: string
    - name: annotator_info
      struct:
        - name: age
          dtype: int64
        - name: annotator_id
          dtype: string
        - name: country_of_birth
          dtype: string
        - name: country_of_residence
          dtype: string
        - name: gender
          dtype: string
    - name: left_img_id
      dtype: string
    - name: right_img_id
      dtype: string
    - name: left_img
      struct:
        - name: bytes
          dtype: binary
        - name: path
          dtype: 'null'
    - name: right_img
      struct:
        - name: bytes
          dtype: binary
        - name: path
          dtype: 'null'
    - name: resized_left_img
      struct:
        - name: bytes
          dtype: binary
        - name: path
          dtype: 'null'
    - name: resized_right_img
      struct:
        - name: bytes
          dtype: binary
        - name: path
          dtype: 'null'
    - name: vertically_stacked_img
      struct:
        - name: bytes
          dtype: binary
        - name: path
          dtype: 'null'
    - name: horizontally_stacked_img
      struct:
        - name: bytes
          dtype: binary
        - name: path
          dtype: 'null'
  splits:
    - name: id
      num_bytes: 2079196646
      num_examples: 1128
    - name: sw
      num_bytes: 899838181
      num_examples: 1108
    - name: ta
      num_bytes: 801784098
      num_examples: 1242
    - name: tr
      num_bytes: 1373652829
      num_examples: 1180
    - name: zh
      num_bytes: 1193602152
      num_examples: 1012
  download_size: 6234764237
  dataset_size: 6348073906
configs:
  - config_name: default
    data_files:
      - split: id
        path: data/id-*
      - split: sw
        path: data/sw-*
      - split: ta
        path: data/ta-*
      - split: tr
        path: data/tr-*
      - split: zh
        path: data/zh-*

---

# MaRVL

This is a copy of the original repo: https://github.com/marvl-challenge/marvl-code

If you use this dataset, please cite the original authors:

```bibtex
@inproceedings{liu-etal-2021-visually,
    title = "Visually Grounded Reasoning across Languages and Cultures",
    author = "Liu, Fangyu  and
      Bugliarello, Emanuele  and
      Ponti, Edoardo Maria  and
      Reddy, Siva  and
      Collier, Nigel  and
      Elliott, Desmond",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.818",
    pages = "10467--10485",
}
```

## Additional data

In addition to the data available in the original repo, this dataset contains the following columns:

- `hypo_en`: English translation of the hypothesis, created with Bing Translate
- `left_img`: PIL Image
- `right_img`: PIL Image
- `resized_left_img`: resized PIL Image
- `resized_right_img`: resized PIL Image
- `vertically_stacked_img`: PIL Image containing the resized left and right images stacked vertically with a 10 px black gutter
- `horizontally_stacked_img`: PIL Image containing the resized left and right images stacked horizontally with a 10 px black gutter
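The stacked columns can be reproduced from the resized image pair with plain Pillow. A minimal sketch of the vertical case, assuming only what the description above states (10 px black gutter between the two images; the function name is my own):

```python
from PIL import Image

GUTTER = 10  # black gutter between the two images, per the column description


def stack_vertically(top: Image.Image, bottom: Image.Image) -> Image.Image:
    """Stack two images top-to-bottom with a black 10 px gutter."""
    width = max(top.width, bottom.width)
    height = top.height + GUTTER + bottom.height
    canvas = Image.new("RGB", (width, height), "black")
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (0, top.height + GUTTER))
    return canvas


# Example with two dummy images
a = Image.new("RGB", (640, 480), "white")
b = Image.new("RGB", (640, 360), "white")
stacked = stack_vertically(a, b)
print(stacked.size)  # (640, 850) -- 480 + 10 + 360 tall
```

The horizontal variant is symmetric: swap the roles of width and height and paste at `(left.width + GUTTER, 0)`.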

The images were resized using `img2dataset`:

```python
Resizer(
    image_size=640,
    resize_mode=ResizeMode.keep_ratio,
    resize_only_if_bigger=True,
)
```
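The same resize policy (longest side capped at 640 px, aspect ratio preserved, no upscaling) can be approximated with plain Pillow's `thumbnail`. A hedged sketch, not the exact `img2dataset` implementation:

```python
from PIL import Image

MAX_SIDE = 640  # matches image_size=640 in the Resizer config


def resize_keep_ratio(img: Image.Image) -> Image.Image:
    """Downscale so the longest side is at most 640 px; never upscale."""
    out = img.copy()
    # thumbnail() works in place, keeps the aspect ratio, and only shrinks
    out.thumbnail((MAX_SIDE, MAX_SIDE))
    return out


big = Image.new("RGB", (1280, 960))
small = Image.new("RGB", (320, 240))
print(resize_keep_ratio(big).size)    # (640, 480)
print(resize_keep_ratio(small).size)  # (320, 240), left unchanged
```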

## How to read the images

Due to a bug, the images could not be stored as `PIL.Image.Image` objects directly and had to be converted to `datasets.Image` features instead. Hence, this additional step is required to load them:

```python
from datasets import Image, load_dataset

ds = load_dataset("floschne/marvl", split="sw")
# Decode the stored image structs into PIL images. The list comprehensions
# operate on whole columns, so the map must run in batched mode.
ds = ds.map(
    lambda sample: {
        "left_img_t": [Image().decode_example(img) for img in sample["left_img"]],
        "right_img_t": [Image().decode_example(img) for img in sample["right_img"]],
        "resized_left_img_t": [
            Image().decode_example(img) for img in sample["resized_left_img"]
        ],
        "resized_right_img_t": [
            Image().decode_example(img) for img in sample["resized_right_img"]
        ],
        "vertically_stacked_img_t": [
            Image().decode_example(img) for img in sample["vertically_stacked_img"]
        ],
        "horizontally_stacked_img_t": [
            Image().decode_example(img) for img in sample["horizontally_stacked_img"]
        ],
    },
    batched=True,
    remove_columns=[
        "left_img",
        "right_img",
        "resized_left_img",
        "resized_right_img",
        "vertically_stacked_img",
        "horizontally_stacked_img",
    ],
).rename_columns(
    {
        "left_img_t": "left_img",
        "right_img_t": "right_img",
        "resized_left_img_t": "resized_left_img",
        "resized_right_img_t": "resized_right_img",
        "vertically_stacked_img_t": "vertically_stacked_img",
        "horizontally_stacked_img_t": "horizontally_stacked_img",
    }
)
```