---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- multimodal
- vision-and-language
pretty_name: VSR (zeroshot)
size_categories:
- 1K<n<10K
---
# VSR: Visual Spatial Reasoning
This is the **zero-shot set** of **VSR**: *Visual Spatial Reasoning* (TACL 2023) [[paper]](https://arxiv.org/abs/2205.00363).
### Usage
```python
from datasets import load_dataset

# Load the zero-shot split files shipped with this dataset repo.
data_files = {"train": "train.jsonl", "dev": "dev.jsonl", "test": "test.jsonl"}
dataset = load_dataset("cambridgeltl/vsr_zeroshot", data_files=data_files)
```
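Each split is a standard `datasets` table of dicts. A quick way to sanity-check what you loaded (the field names `caption`, `relation`, and `label` follow the VSR jsonl schema; verify against a real record if your copy differs):
```python
# Peek at one example. Field names are taken from the VSR jsonl schema;
# treat them as assumptions and check an actual record if in doubt.
example = dataset["train"][0]
print(example["caption"])   # natural-language statement about the image
print(example["relation"])  # the spatial relation under test, e.g. "on top of"
print(example["label"])     # 1 if the caption is true of the image, else 0
```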
Note that the image files still need to be downloaded separately. See [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for details.
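Once the images are in place, they can be joined with the annotations by filename. This is a minimal sketch, assuming the images were downloaded into a local `images/` directory (a hypothetical path) and that each example's `image` field stores the corresponding filename:
```python
from pathlib import Path
from PIL import Image

IMAGE_DIR = Path("images")  # hypothetical: wherever you placed the downloaded images

example = dataset["train"][0]
img = Image.open(IMAGE_DIR / example["image"])  # assumes `image` holds the filename
print(img.size, example["caption"], example["label"])
```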
See our [GitHub repo](https://github.com/cambridgeltl/visual-spatial-reasoning) for more details.
### Citation
If you find VSR useful, please cite:
```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={Transactions of the Association for Computational Linguistics},
  year={2023},
}
```