---
language:
- ar
- en
- es
- fr
- ru
license: mit
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
pretty_name: XVNLI
dataset_info:
features:
- name: label
dtype: string
- name: caption
dtype: string
- name: hypothesis
dtype: string
- name: caption_id
dtype: string
- name: pair_id
dtype: string
- name: flikr30k_id
dtype: string
- name: image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: ar
num_bytes: 45192381
num_examples: 1164
- name: en
num_bytes: 45141859
num_examples: 1164
- name: es
num_bytes: 45162738
num_examples: 1164
- name: fr
num_bytes: 45161740
num_examples: 1164
- name: ru
num_bytes: 45256629
num_examples: 1164
download_size: 70974300
dataset_size: 225915347
configs:
- config_name: default
data_files:
- split: ar
path: data/ar-*
- split: en
path: data/en-*
- split: es
path: data/es-*
- split: fr
path: data/fr-*
- split: ru
path: data/ru-*
---
# XVNLI
### This is a copy of the original repo: https://github.com/e-bug/iglue
If you use this dataset, please cite the original authors:
```bibtex
@inproceedings{bugliarello-etal-2022-iglue,
  title     = {{IGLUE}: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages},
  author    = {Bugliarello, Emanuele and Liu, Fangyu and Pfeiffer, Jonas and Reddy, Siva and Elliott, Desmond and Ponti, Edoardo Maria and Vuli{\'c}, Ivan},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {2370--2392},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/bugliarello22a/bugliarello22a.pdf},
  url       = {https://proceedings.mlr.press/v162/bugliarello22a.html},
}
```
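### How to load the data
Each language (ar, en, es, fr, ru) is exposed as its own split of the default config, with 1164 examples per language. A minimal loading sketch; note that the `image` column still needs the decoding step described below:
```python
from datasets import load_dataset

# One split per language; each split holds 1164 examples
splits = ["ar", "en", "es", "fr", "ru"]
xvnli = {lang: load_dataset("floschne/xvnli", split=lang) for lang in splits}
print({lang: ds.num_rows for lang, ds in xvnli.items()})
```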
### How to read the images
Due to a [bug](https://github.com/huggingface/datasets/issues/4796), the images cannot be stored as `PIL.Image.Image` objects directly; they are kept as raw `{bytes, path}` structs and must be decoded with `datasets.Image`. Hence, this additional step is required to load them:
```python
from datasets import Image, load_dataset

ds = load_dataset("floschne/xvnli", split="en")

# Decode the raw {bytes, path} structs into PIL images and swap the column back in
ds = ds.map(
    lambda batch: {
        "image_t": [Image().decode_example(img) for img in batch["image"]],
    },
    batched=True,
    remove_columns=["image"],
).rename_columns({"image_t": "image"})
```
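Afterwards, the `image` column holds decoded PIL images. A quick inspection sketch (field names taken from the dataset features above):
```python
# Inspect the first decoded English example
sample = ds[0]
print(sample["label"], "|", sample["hypothesis"])
print(sample["caption"], "| Flickr30k id:", sample["flikr30k_id"])
print(sample["image"].size)  # decoded PIL.Image.Image
```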