system (HF staff) committed on
Commit 25c08af
1 Parent(s): 8ae9dc9

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +185 -0
README.md ADDED
@@ -0,0 +1,185 @@
---
---

# Dataset Card for "cos_e"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/salesforce/cos-e](https://github.com/salesforce/cos-e)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 10.33 MB
- **Size of the generated dataset:** 5.14 MB
- **Total amount of disk used:** 15.47 MB

### [Dataset Summary](#dataset-summary)

Common Sense Explanations (CoS-E) enables training language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for the two configurations of the dataset: `v1.0` and `v1.11`.

### [Data Instances](#data-instances)

#### v1.0

- **Size of downloaded dataset files:** 4.10 MB
- **Size of the generated dataset:** 2.23 MB
- **Total amount of disk used:** 6.33 MB

An example of 'train' looks as follows.
```
{
    "abstractive_explanation": "this is open-ended",
    "answer": "b",
    "choices": ["a", "b", "c"],
    "extractive_explanation": "this is selected train",
    "id": "42",
    "question": "question goes here."
}
```

#### v1.11

- **Size of downloaded dataset files:** 6.23 MB
- **Size of the generated dataset:** 2.91 MB
- **Total amount of disk used:** 9.14 MB

An example of 'train' looks as follows.
```
{
    "abstractive_explanation": "this is open-ended",
    "answer": "b",
    "choices": ["a", "b", "c"],
    "extractive_explanation": "this is selected train",
    "id": "42",
    "question": "question goes here."
}
```

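The examples above show the record format for each configuration. To inspect actual records, the dataset can be loaded by configuration name. Below is a minimal sketch using the `datasets` library (the `cos_e` dataset ID and the `v1.0`/`v1.11` configuration names come from this card; a `datasets` version of 1.3.0 or later is assumed):

```python
from datasets import load_dataset

# Load the v1.11 configuration; "v1.0" works the same way.
cos_e = load_dataset("cos_e", "v1.11")

# Print one training record with the fields documented in this card.
print(cos_e["train"][0])
```
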
### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### v1.0
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a `list` of `string` features.
- `answer`: a `string` feature.
- `abstractive_explanation`: a `string` feature.
- `extractive_explanation`: a `string` feature.

#### v1.11
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a `list` of `string` features.
- `answer`: a `string` feature.
- `abstractive_explanation`: a `string` feature.
- `extractive_explanation`: a `string` feature.

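The field types can also be checked programmatically through the `features` attribute of a loaded split. A small sketch, assuming `cos_e` was loaded with the `datasets` library as in the previous example:

```python
# Assumes: cos_e = load_dataset("cos_e", "v1.11"), as in the sketch above.
print(cos_e["train"].features)
# Expected keys: id, question, choices, answer,
# abstractive_explanation, extractive_explanation
```
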
### [Data Splits Sample Size](#data-splits-sample-size)

| name  | train | validation |
|-------|------:|-----------:|
| v1.0  |  7610 |        950 |
| v1.11 |  9741 |       1221 |

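The split sizes in the table can be verified from the loaded `DatasetDict`. A minimal sketch, reusing the `cos_e` object from the earlier example (loaded with the `v1.11` configuration):

```python
# Assumes: cos_e = load_dataset("cos_e", "v1.11"), as in the sketch above.
for split_name, split in cos_e.items():
    print(split_name, split.num_rows)
# Expected for v1.11: train 9741, validation 1221
```
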
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{rajani2019explain,
    title = "Explain Yourself! Leveraging Language Models for Commonsense Reasoning",
    author = "Rajani, Nazneen Fatema and
      McCann, Bryan and
      Xiong, Caiming and
      Socher, Richard",
    year = "2019",
    booktitle = "Proceedings of the 2019 Conference of the Association for Computational Linguistics (ACL2019)",
    url = "https://arxiv.org/abs/1906.02361"
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset.