---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
languages:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
---

# Dataset Card for "multi_nli"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 216.34 MB
- **Size of the generated dataset:** 73.39 MB
- **Total amount of disk used:** 289.74 MB

### [Dataset Summary](#dataset-summary)

The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### plain_text

- **Size of downloaded dataset files:** 216.34 MB
- **Size of the generated dataset:** 73.39 MB
- **Total amount of disk used:** 289.74 MB

An example of 'validation_matched' looks as follows.
```
{
  "hypothesis": "flammable",
  "label": 0,
  "premise": "inflammable"
}
```
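
Since each record is a flat mapping of two strings and an integer, it is easy to sanity-check without loading the full corpus. A minimal sketch, assuming the field names and label range documented in the schema below (the helper function is ours, not part of the `datasets` API):

```python
def validate_instance(example: dict) -> bool:
    """Return True if a record matches the plain_text schema:
    string `premise` and `hypothesis`, and an integer `label` in {0, 1, 2}."""
    return (
        isinstance(example.get("premise"), str)
        and isinstance(example.get("hypothesis"), str)
        and example.get("label") in (0, 1, 2)
    )

# The example record shown above passes the check.
sample = {"hypothesis": "flammable", "label": 0, "premise": "inflammable"}
```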

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
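
Converting between the integer codes and label names above can be sketched in pure Python (the order mirrors the mapping given here; the helper names are ours):

```python
# Label names in the order of their integer codes (0, 1, 2).
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

def id_to_label(label_id: int) -> str:
    """Map an integer label code to its name."""
    return LABEL_NAMES[label_id]

def label_to_id(name: str) -> int:
    """Map a label name back to its integer code."""
    return LABEL_NAMES.index(name)
```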

### [Data Splits Sample Size](#data-splits-sample-size)

| name       |  train | validation_matched | validation_mismatched |
|------------|-------:|-------------------:|----------------------:|
| plain_text | 392702 |               9815 |                  9832 |
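
The distributed splits can be tallied directly; a quick sketch with the numbers copied from the table above:

```python
# Split sizes as listed in the table above.
split_sizes = {
    "train": 392702,
    "validation_matched": 9815,
    "validation_mismatched": 9832,
}

# Total examples across the distributed splits.
total = sum(split_sizes.values())
```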

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@InProceedings{N18-1101,
  author    = "Williams, Adina
               and Nangia, Nikita
               and Bowman, Samuel",
  title     = "A Broad-Coverage Challenge Corpus for
               Sentence Understanding through Inference",
  booktitle = "Proceedings of the 2018 Conference of
               the North American Chapter of the
               Association for Computational Linguistics:
               Human Language Technologies, Volume 1 (Long
               Papers)",
  year      = "2018",
  publisher = "Association for Computational Linguistics",
  pages     = "1112--1122",
  location  = "New Orleans, Louisiana",
  url       = "http://aclweb.org/anthology/N18-1101"
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.