---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Fig-QA
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
---

# Dataset Card for Fig-QA

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Splits](#data-splits)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Repository:** https://github.com/nightingal3/Fig-QA 
- **Paper:** https://arxiv.org/abs/2204.12632
- **Leaderboard:** https://explainaboard.inspiredco.ai/leaderboards?dataset=fig_qa
- **Point of Contact:** emmy@cmu.edu

### Dataset Summary

This is the dataset for the paper [Testing the Ability of Language Models to Interpret Figurative Language](https://arxiv.org/abs/2204.12632). Fig-QA consists of 10,256 human-written creative metaphors paired in a Winograd-schema format, and can be used to evaluate the commonsense reasoning of models. The metaphors themselves can also serve as training data for other tasks, such as metaphor detection or generation.
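For quick experimentation, the dataset can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the Hub ID `nightingal3/fig-qa` and the field names mentioned in the comments are assumptions based on this card and the paper, not guaranteed by them.

```python
from datasets import load_dataset

# Hub ID is an assumption -- substitute the actual path this card is hosted under.
fig_qa = load_dataset("nightingal3/fig-qa")

# Each example pairs a metaphorical start phrase with two candidate endings
# (field names such as "startphrase"/"ending1"/"ending2" are assumptions).
print(fig_qa["train"][0])
```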

### Supported Tasks and Leaderboards

You can evaluate your models on the test set by submitting to the [leaderboard](https://explainaboard.inspiredco.ai/leaderboards?dataset=fig_qa) on ExplainaBoard. Click "New", select `qa-multiple-choice` as the task and `accuracy` as the metric, then upload your results as a system output file in JSON or JSONL format.
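As a rough sketch of what a JSONL system output file might look like (the field names below are placeholders; check the ExplainaBoard documentation for the exact schema expected by the `qa-multiple-choice` task):

```python
import json

# Placeholder schema -- verify the exact field names ExplainaBoard expects
# for qa-multiple-choice before submitting.
predictions = [
    {"id": "0", "predicted_answer": "ending1"},
    {"id": "1", "predicted_answer": "ending2"},
]

# Write one JSON object per line (JSONL).
with open("system_output.jsonl", "w") as f:
    for prediction in predictions:
        f.write(json.dumps(prediction) + "\n")
```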

### Languages

This is the English version of the dataset. A multilingual version can be found [here](https://huggingface.co/datasets/cmu-lti/multi-figqa).

### Data Splits

- Train-{S, M (no suffix), XL}: training sets of different sizes
- Dev: development set
- Test: test set (labels are not provided)
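A quick way to check which splits are actually available and how large they are, again assuming the hypothetical Hub ID from above:

```python
from datasets import load_dataset

fig_qa = load_dataset("nightingal3/fig-qa")  # hypothetical Hub ID

# Split names (e.g. "train", "validation", "test") may differ from the
# labels used in this section; print whatever the loader exposes.
for split_name, split in fig_qa.items():
    print(f"{split_name}: {len(split)} examples")
```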

## Considerations for Using the Data

### Discussion of Biases

These metaphors are human-generated and may contain insults or other explicit content. The authors of the paper manually removed offensive content, but users should keep in mind that some potentially offensive content may remain in the dataset.

## Additional Information

### Licensing Information

MIT License

### Citation Information

If you find the dataset useful, please cite the following paper:

    @misc{https://doi.org/10.48550/arxiv.2204.12632,
      doi = {10.48550/ARXIV.2204.12632},
      url = {https://arxiv.org/abs/2204.12632},
      author = {Liu, Emmy and Cui, Chen and Zheng, Kenneth and Neubig, Graham},
      keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
      title = {Testing the Ability of Language Models to Interpret Figurative Language},
      publisher = {arXiv},
      year = {2022},
      copyright = {Creative Commons Attribution Share Alike 4.0 International}
    }