---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
- extended|hotpot_qa
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: anli
pretty_name: Adversarial NLI
dataset_info:
  config_name: plain_text
  features:
  - name: uid
    dtype: string
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  - name: reason
    dtype: string
  splits:
  - name: train_r1
    num_bytes: 8006888
    num_examples: 16946
  - name: dev_r1
    num_bytes: 573428
    num_examples: 1000
  - name: test_r1
    num_bytes: 574917
    num_examples: 1000
  - name: train_r2
    num_bytes: 20801581
    num_examples: 45460
  - name: dev_r2
    num_bytes: 556066
    num_examples: 1000
  - name: test_r2
    num_bytes: 572639
    num_examples: 1000
  - name: train_r3
    num_bytes: 44720719
    num_examples: 100459
  - name: dev_r3
    num_bytes: 663148
    num_examples: 1200
  - name: test_r3
    num_bytes: 657586
    num_examples: 1200
  download_size: 26286748
  dataset_size: 77126972
configs:
- config_name: plain_text
  data_files:
  - split: train_r1
    path: plain_text/train_r1-*
  - split: dev_r1
    path: plain_text/dev_r1-*
  - split: test_r1
    path: plain_text/test_r1-*
  - split: train_r2
    path: plain_text/train_r2-*
  - split: dev_r2
    path: plain_text/dev_r2-*
  - split: test_r2
    path: plain_text/test_r2-*
  - split: train_r3
    path: plain_text/train_r3-*
  - split: dev_r3
    path: plain_text/dev_r3-*
  - split: test_r3
    path: plain_text/test_r3-*
  default: true
---

# Dataset Card for "anli"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [https://github.com/facebookresearch/anli/](https://github.com/facebookresearch/anli/)
- **Paper:** [Adversarial NLI: A New Benchmark for Natural Language Understanding](https://arxiv.org/abs/1910.14599)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 26.29 MB
- **Size of the generated dataset:** 77.13 MB
- **Total amount of disk used:** 103.41 MB

### Dataset Summary

Adversarial Natural Language Inference (ANLI) is a large-scale NLI benchmark dataset collected via an iterative, adversarial human-and-model-in-the-loop procedure. ANLI is substantially more difficult than its predecessors, including SNLI and MNLI. It was collected in three rounds, each with its own train/dev/test splits.
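
All nine splits can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming `datasets` is installed:

```python
from datasets import load_dataset

# Loads the default "plain_text" config with all nine splits
# (train/dev/test for rounds 1-3).
anli = load_dataset("anli")

print(anli)                 # DatasetDict listing the nine splits
print(anli["train_r1"][0])  # first example of the round-1 training split
```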

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

English

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 26.29 MB
- **Size of the generated dataset:** 77.13 MB
- **Total amount of disk used:** 103.41 MB

An example from the `train_r2` split looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "Idris Sultan was born in the first month of the year preceding 1994.",
    "label": 0,
    "premise": "\"Idris Sultan (born January 1993) is a Tanzanian Actor and comedian, actor and radio host who won the Big Brother Africa-Hotshot...",
    "reason": "",
    "uid": "ed5c37ab-77c5-4dbc-ba75-8fd617b19712"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text
- `uid`: a `string` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label with three classes: `entailment` (0), `neutral` (1), `contradiction` (2).
- `reason`: a `string` feature.
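
The `label` column is stored as a `ClassLabel`, so integer values can be mapped back to their string names. A brief sketch:

```python
from datasets import load_dataset

anli_test = load_dataset("anli", split="test_r1")

# int2str maps the stored integer back to its class name,
# e.g. 0 -> "entailment".
label_feature = anli_test.features["label"]
example = anli_test[0]
print(label_feature.int2str(example["label"]))
```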

### Data Splits

|   name   |train_r1|dev_r1|test_r1|train_r2|dev_r2|test_r2|train_r3|dev_r3|test_r3|
|----------|-------:|-----:|------:|-------:|-----:|------:|-------:|-----:|------:|
|plain_text|   16946|  1000|   1000|   45460|  1000|   1000|  100459|  1200|   1200|
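
Since each round has its own training split, a common pattern is to pool the three training sets into one. A sketch using `concatenate_datasets`:

```python
from datasets import concatenate_datasets, load_dataset

anli = load_dataset("anli")

# Pool the three training rounds:
# 16,946 + 45,460 + 100,459 = 162,865 examples.
train_all = concatenate_datasets(
    [anli["train_r1"], anli["train_r2"], anli["train_r3"]]
)
print(len(train_all))  # 162865
```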

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0)](https://github.com/facebookresearch/anli/blob/main/LICENSE)

### Citation Information

```
@inproceedings{nie2019adversarial,
    title = "Adversarial NLI: A New Benchmark for Natural Language Understanding",
    author = "Nie, Yixin and Williams, Adina and Dinan, Emily and Bansal, Mohit and Weston, Jason and Kiela, Douwe",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    year = "2020",
    publisher = "Association for Computational Linguistics",
}
```


### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@easonnie](https://github.com/easonnie), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.