system HF staff committed on
Commit
a7a10d1
1 Parent(s): 4e4fadf

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +160 -0
README.md ADDED
@@ -0,0 +1,160 @@
+ ---
+ ---
+
+ # Dataset Card for "anli"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits Sample Size](#data-splits-sample-size)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## [Dataset Description](#dataset-description)
+
+ - **Homepage:** [https://github.com/facebookresearch/anli/](https://github.com/facebookresearch/anli/)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 17.76 MB
+ - **Size of the generated dataset:** 73.55 MB
+ - **Total amount of disk used:** 91.31 MB
+
+ ### [Dataset Summary](#dataset-summary)
+
+ The Adversarial Natural Language Inference (ANLI) dataset is a new large-scale NLI benchmark,
+ collected via an iterative, adversarial human-and-model-in-the-loop procedure.
+ ANLI is much more difficult than its predecessors, including SNLI and MNLI.
+ It contains three rounds; each round has train/dev/test splits.
+
+ ### [Supported Tasks](#supported-tasks)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Languages](#languages)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## [Dataset Structure](#dataset-structure)
+
+ We show detailed information for up to 5 configurations of the dataset.
+
+ ### [Data Instances](#data-instances)
+
+ #### plain_text
+
+ - **Size of downloaded dataset files:** 17.76 MB
+ - **Size of the generated dataset:** 73.55 MB
+ - **Total amount of disk used:** 91.31 MB
+
+ An example of 'train_r2' looks as follows.
+ ```
+ This example was too long and was cropped:
+
+ {
+     "hypothesis": "Idris Sultan was born in the first month of the year preceding 1994.",
+     "label": 0,
+     "premise": "\"Idris Sultan (born January 1993) is a Tanzanian Actor and comedian, actor and radio host who won the Big Brother Africa-Hotshot...",
+     "reason": "",
+     "uid": "ed5c37ab-77c5-4dbc-ba75-8fd617b19712"
+ }
+ ```
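+
+ As a quick orientation, here is a minimal sketch of loading the dataset with the `datasets` library and retrieving a `train_r2` instance like the one above; the split names are assumed to match the [Data Splits Sample Size](#data-splits-sample-size) table below.
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads and prepares all three rounds; each round has its own train/dev/test splits.
+ anli = load_dataset("anli")
+
+ example = anli["train_r2"][0]
+ print(example["premise"])
+ print(example["hypothesis"])
+ print(example["label"])  # integer class id, see "Data Fields" below
+ ```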
+
+ ### [Data Fields](#data-fields)
+
+ The data fields are the same among all splits.
+
+ #### plain_text
+ - `uid`: a `string` feature.
+ - `premise`: a `string` feature.
+ - `hypothesis`: a `string` feature.
+ - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
+ - `reason`: a `string` feature.
+
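+ The integer `label` values can be mapped back to their names through the split's `ClassLabel` feature; a short sketch, assuming the label order listed above (`entailment` = 0, `neutral` = 1, `contradiction` = 2):
+
+ ```python
+ from datasets import load_dataset
+
+ anli = load_dataset("anli")
+ label_feature = anli["train_r1"].features["label"]
+
+ print(label_feature.names)                     # ['entailment', 'neutral', 'contradiction']
+ print(label_feature.int2str(0))                # 'entailment'
+ print(label_feature.str2int("contradiction"))  # 2
+ ```
+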
+ ### [Data Splits Sample Size](#data-splits-sample-size)
+
+ |   name   |train_r1|dev_r1|train_r2|dev_r2|train_r3|dev_r3|test_r1|test_r2|test_r3|
+ |----------|-------:|-----:|-------:|-----:|-------:|-----:|------:|------:|------:|
+ |plain_text|   16946|  1000|   45460|  1000|  100459|  1200|   1000|   1000|   1200|
+
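+ These counts can be checked locally by iterating over the loaded `DatasetDict`; a small sketch:
+
+ ```python
+ from datasets import load_dataset
+
+ anli = load_dataset("anli")
+
+ # Number of rows per split; values should match the table above.
+ for split_name, split in anli.items():
+     print(f"{split_name}: {split.num_rows}")
+ ```
+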
+ ## [Dataset Creation](#dataset-creation)
+
+ ### [Curation Rationale](#curation-rationale)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Source Data](#source-data)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Annotations](#annotations)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Personal and Sensitive Information](#personal-and-sensitive-information)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## [Considerations for Using the Data](#considerations-for-using-the-data)
+
+ ### [Social Impact of Dataset](#social-impact-of-dataset)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Discussion of Biases](#discussion-of-biases)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Other Known Limitations](#other-known-limitations)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## [Additional Information](#additional-information)
+
+ ### [Dataset Curators](#dataset-curators)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Licensing Information](#licensing-information)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Citation Information](#citation-information)
+
+ ```
+ @InProceedings{nie2019adversarial,
+     title={Adversarial NLI: A New Benchmark for Natural Language Understanding},
+     author={Nie, Yixin
+             and Williams, Adina
+             and Dinan, Emily
+             and Bansal, Mohit
+             and Weston, Jason
+             and Kiela, Douwe},
+     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+     year = "2020",
+     publisher = "Association for Computational Linguistics",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@thomwolf](https://github.com/thomwolf), [@easonnie](https://github.com/easonnie), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.