system HF staff committed on
Commit
a929286
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +184 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
  5. numer_sense.py +85 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,184 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - slot-filling
+ ---
+
+ # Dataset Card for NumerSense
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://inklab.usc.edu/NumerSense/
+ - **Repository:** https://github.com/INK-USC/NumerSense
+ - **Paper:** https://arxiv.org/abs/2005.00683
+ - **Leaderboard:** https://inklab.usc.edu/NumerSense/#exp
+ - **Point of Contact:** Author emails listed in the [paper](https://arxiv.org/abs/2005.00683)
+
+ ### Dataset Summary
+
+ NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145
+ masked-word-prediction probes. The general idea is to mask numbers between 0-10 in sentences mined from a commonsense
+ corpus and evaluate whether a language model can correctly predict the masked value.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports the task of slot-filling, specifically as an evaluation of numerical common sense. A leaderboard
+ is hosted on the [dataset webpage](https://inklab.usc.edu/NumerSense/#exp) with benchmarks for GPT-2,
+ RoBERTa, BERT, and human performance. Leaderboards are included for both the core set and the adversarial set
+ discussed below.
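Submissions are scored by whether the gold number word appears among a model's top-ranked candidates. A minimal sketch of such a hit@k accuracy follows; the ranked-prediction input format is an assumption for illustration, not the official scoring script:

```python
# Hypothetical evaluation sketch: each probe's predictions are a list of
# candidate number words ranked from most to least probable.

def hit_at_k(ranked_predictions, targets, k):
    """Fraction of probes whose gold word is in the model's top-k candidates."""
    hits = sum(
        target in ranked[:k]
        for ranked, target in zip(ranked_predictions, targets)
    )
    return hits / len(targets)

preds = [["two", "three"], ["four", "one"]]
golds = ["two", "one"]
print(hit_at_k(preds, golds, 1))  # 0.5
print(hit_at_k(preds, golds, 2))  # 1.0
```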
+
+ ### Languages
+
+ This dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance consists of a sentence with a masked numerical value between 0-10 and (in the train set) a target.
+ Example from the training set:
+
+ ```
+ sentence: Black bears are about <mask> metres tall.
+ target: two
+ ```
+
+ ### Data Fields
+
+ Each example in the training set consists of:
+ - `sentence`: The sentence with a number masked out with the `<mask>` token.
+ - `target`: The ground truth target value. Since the test sets do not include the ground truth, the `target` field
+   values are empty strings in the `test_core` and `test_all` splits.
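Downstream code sees each example as a flat dict of these two strings. A small sketch of consuming one; the record here is hand-written as a stand-in for a row returned by the `datasets` library:

```python
# Hand-constructed record mirroring the fields above; real rows come from
# the datasets library (e.g. load_dataset("numer_sense")["train"]).
record = {"sentence": "Black bears are about <mask> metres tall.", "target": "two"}

def fill_mask(record, answer=None):
    """Substitute <mask> with a model's answer, defaulting to the gold target."""
    word = answer if answer is not None else record["target"]
    return record["sentence"].replace("<mask>", word, 1)

print(fill_mask(record))  # Black bears are about two metres tall.
```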
+
+ ### Data Splits
+
+ The dataset includes the following pre-defined data splits:
+
+ - A train set with >10K labeled examples (i.e. containing a ground truth value)
+ - A core test set (`test_core`) with 1,132 examples (no ground truth provided)
+ - An expanded test set (`test_all`) encompassing `test_core` with the addition of adversarial examples, for a total of
+   3,146 examples. See section 2.2 of [the paper](https://arxiv.org/abs/2005.00683) for a discussion of how these examples are constructed.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The purpose of this dataset is "to study whether PTLMs capture numerical commonsense knowledge, i.e., commonsense
+ knowledge that provides an understanding of the numeric relation between entities." This work is motivated by
+ prior research exploring whether language models possess _commonsense knowledge_.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The dataset is an extension of the [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense)
+ corpus. A query was performed to discover sentences containing numbers between 0-12, after which the resulting
+ sentences were manually evaluated for inaccuracies, typos, and the expression of commonsense knowledge. The numerical
+ values were then masked.
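The masking step can be approximated with a regular expression over number words. This is an illustrative sketch under that assumption, not the authors' actual pipeline:

```python
import re

# Number words retained in the released probes (0-10; the initial query
# reportedly covered 0-12).
NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]
_PATTERN = re.compile(r"\b(" + "|".join(NUMBER_WORDS) + r")\b", re.IGNORECASE)

def mask_first_number(sentence):
    """Replace the first number word with <mask>; return (masked, target)."""
    match = _PATTERN.search(sentence)
    if match is None:
        return None  # sentence contains no maskable number word
    masked = sentence[:match.start()] + "<mask>" + sentence[match.end():]
    return masked, match.group(1).lower()

print(mask_first_number("Black bears are about two metres tall."))
# ('Black bears are about <mask> metres tall.', 'two')
```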
+
+ #### Who are the source language producers?
+
+ The [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense) corpus, from which this dataset
+ is sourced, is a crowdsourced dataset maintained by the MIT Media Lab.
+
+ ### Annotations
+
+ #### Annotation process
+
+ No annotations are present in this dataset beyond the `target` values automatically sourced from the masked
+ sentences, as discussed above.
+
+ #### Who are the annotators?
+
+ The curation and inspection were done in two rounds by graduate students.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The motivation of measuring a model's ability to associate numerical values with real-world concepts appears
+ relatively innocuous. However, as discussed in the following section, the source dataset may well have biases encoded
+ from crowdworkers, particularly in terms of factoid coverage. A model's ability to perform well on this benchmark
+ should therefore not be considered evidence that it is more unbiased or objective than a human performing similar
+ tasks.
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ This dataset is sourced from a crowdsourced commonsense knowledge base. While the information contained in the graph
+ is generally considered to be of high quality, its coverage is considered to be very low as a representation of all
+ possible commonsense knowledge. The representation of certain factoids may also be skewed by the demographics of the
+ crowdworkers. As one possible example, the term "homophobia" is connected with "Islam" in the ConceptNet knowledge
+ base, but not with any other religion or group, possibly due to the biases of crowdworkers contributing to the
+ project.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ This dataset was collected by Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren, Computer Science researchers
+ at the University of Southern California.
+
+ ### Licensing Information
+
+ The data is hosted in a GitHub repository with the
+ [MIT License](https://github.com/INK-USC/NumerSense/blob/main/LICENSE).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{lin2020numersense,
+   title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
+   author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren},
+   booktitle={Proceedings of EMNLP},
+   year={2020},
+   note={to appear}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes.\n\nWe propose to study whether numerical commonsense knowledge can be induced from pre-trained language models like BERT, and to what extent this access to knowledge is robust against adversarial examples. We hope this will be beneficial for tasks such as knowledge base completion and open-domain question answering.\n", "citation": "@inproceedings{lin2020numersense,\n title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},\n author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren}, \n booktitle={Proceedings of EMNLP},\n year={2020},\n note={to appear}\n}\n", "homepage": "https://inklab.usc.edu/NumerSense/", "license": "", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "target": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "numer_sense", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 825865, "num_examples": 10444, "dataset_name": "numer_sense"}, "test_core": {"name": "test_core", "num_bytes": 62652, "num_examples": 1132, "dataset_name": "numer_sense"}, "test_all": {"name": "test_all", "num_bytes": 184180, "num_examples": 3146, "dataset_name": "numer_sense"}}, "download_checksums": {"https://raw.githubusercontent.com/INK-USC/NumerSense/main/data/train.masked.tsv": {"num_bytes": 763185, "checksum": "34cd706f4070907b8a9fa7200504bea099a6f34a343e282ae4f3a987ecd63d95"}, "https://raw.githubusercontent.com/INK-USC/NumerSense/main/data/test.core.masked.txt": {"num_bytes": 56983, "checksum": "ed8abedebf6875085619db2c1b966da63409a9b3d0ee3c8f1b1a6a6bcfe0d094"}, "https://raw.githubusercontent.com/INK-USC/NumerSense/main/data/test.all.masked.txt": {"num_bytes": 165295, "checksum": "eefc2649b8dd679d2722d41494b01821be5d76d45ba058c0bf71840fa353a89b"}}, "download_size": 985463, "post_processing_size": null, "dataset_size": 1072697, "size_in_bytes": 2058160}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:313b2e4a18f140b28d3066c7eccff497eced2337dab893bfb5328bc6fc500df5
+ size 1045
numer_sense.py ADDED
@@ -0,0 +1,85 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """The NumerSense Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{lin2020numersense,
+     title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
+     author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren},
+     booktitle={Proceedings of EMNLP},
+     year={2020},
+     note={to appear}
+ }
+ """
+
+ _DESCRIPTION = """\
+ NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes.
+
+ We propose to study whether numerical commonsense knowledge can be induced from pre-trained language models like BERT, and to what extent this access to knowledge is robust against adversarial examples. We hope this will be beneficial for tasks such as knowledge base completion and open-domain question answering.
+ """
+
+ _HOMEPAGE_URL = "https://inklab.usc.edu/NumerSense/"
+ _BASE_DOWNLOAD_URL = "https://raw.githubusercontent.com/INK-USC/NumerSense/main/data/"
+
+
+ class NumerSense(datasets.GeneratorBasedBuilder):
+     """The NumerSense dataset: numerical commonsense probing via masked-word prediction."""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "sentence": datasets.Value("string"),
+                     "target": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         train_url = _BASE_DOWNLOAD_URL + "train.masked.tsv"
+         test_core_url = _BASE_DOWNLOAD_URL + "test.core.masked.txt"
+         test_all_url = _BASE_DOWNLOAD_URL + "test.all.masked.txt"
+
+         train_path = dl_manager.download_and_extract(train_url)
+         test_core_path = dl_manager.download_and_extract(test_core_url)
+         test_all_path = dl_manager.download_and_extract(test_all_url)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_path, "is_test": False}),
+             datasets.SplitGenerator(name="test_core", gen_kwargs={"file_path": test_core_path, "is_test": True}),
+             datasets.SplitGenerator(name="test_all", gen_kwargs={"file_path": test_all_path, "is_test": True}),
+         ]
+
+     def _generate_examples(self, file_path, is_test):
+         with open(file_path, "r", encoding="utf-8") as f:
+             if is_test:
+                 for i, sentence in enumerate(f):
+                     yield i, {"sentence": sentence.rstrip(), "target": ""}
+             else:
+                 reader = csv.DictReader(f, delimiter="\t", fieldnames=["sentence", "target"])
+                 for i, row in enumerate(reader):
+                     yield i, row
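The train branch above is a plain two-column TSV read; its behavior can be checked in isolation by feeding `csv.DictReader` an in-memory buffer. The rows below are fabricated in the `train.masked.tsv` layout:

```python
import csv
import io

# Two fabricated rows: sentence <tab> target, one probe per line.
buffer = io.StringIO(
    "Black bears are about <mask> metres tall.\ttwo\n"
    "A square has <mask> sides.\tfour\n"
)
reader = csv.DictReader(buffer, delimiter="\t", fieldnames=["sentence", "target"])
rows = list(reader)
print(rows[1]["target"])  # four
```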