Tasks: Other
Modalities: Text
Formats: parquet
Languages: English
ArXiv: 2011.00905
Libraries: Datasets, Dask
License: cc-by-4.0
Commit 58a55c8 by parquet-converter
1 Parent(s): 3d1fe33

Update parquet files

README.md DELETED
@@ -1,291 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - other
- task_ids: []
- paperswithcode_id: ascentkb
- pretty_name: Ascent KB
- tags:
- - knowledge-base
- dataset_info:
- - config_name: canonical
-   features:
-   - name: arg1
-     dtype: string
-   - name: rel
-     dtype: string
-   - name: arg2
-     dtype: string
-   - name: support
-     dtype: int64
-   - name: facets
-     list:
-     - name: value
-       dtype: string
-     - name: type
-       dtype: string
-     - name: support
-       dtype: int64
-   - name: source_sentences
-     list:
-     - name: text
-       dtype: string
-     - name: source
-       dtype: string
-   splits:
-   - name: train
-     num_bytes: 2976697816
-     num_examples: 8904060
-   download_size: 710727536
-   dataset_size: 2976697816
- - config_name: open
-   features:
-   - name: subject
-     dtype: string
-   - name: predicate
-     dtype: string
-   - name: object
-     dtype: string
-   - name: support
-     dtype: int64
-   - name: facets
-     list:
-     - name: value
-       dtype: string
-     - name: type
-       dtype: string
-     - name: support
-       dtype: int64
-   - name: source_sentences
-     list:
-     - name: text
-       dtype: string
-     - name: source
-       dtype: string
-   splits:
-   - name: train
-     num_bytes: 2882678298
-     num_examples: 8904060
-   download_size: 710727536
-   dataset_size: 2882678298
- ---
-
- # Dataset Card for Ascent KB
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://ascent.mpi-inf.mpg.de/
- - **Repository:** https://github.com/phongnt570/ascent
- - **Paper:** https://arxiv.org/abs/2011.00905
- - **Point of Contact:** http://tuan-phong.com
-
- ### Dataset Summary
-
- This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
- The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.
- The current version of Ascent KB (v1.0.0) is approximately **19 times larger than ConceptNet** (note that, in this comparison, non-commonsense knowledge in ConceptNet, such as lexical relations, is excluded).
-
- For more details, take a look at
- [the research paper](https://arxiv.org/abs/2011.00905) and
- [the website](https://ascent.mpi-inf.mpg.de).
-
- ### Supported Tasks and Leaderboards
-
- The dataset can be used in a wide range of downstream tasks, such as commonsense question answering or dialogue systems.
-
- ### Languages
-
- The dataset is in English.
-
- ## Dataset Structure
-
- ### Data Instances
- There are two configurations available for this dataset:
- 1. `canonical` (default): This part contains `<arg1 ; rel ; arg2>`
-    assertions where the relations (`rel`) were mapped to
-    [ConceptNet relations](https://github.com/commonsense/conceptnet5/wiki/Relations)
-    with slight modifications:
-    - Two new relations were introduced: `/r/HasSubgroup` and `/r/HasAspect`.
-    - All `/r/HasA` relations were replaced with `/r/HasAspect`.
-      This is motivated by the [ATOMIC-2020](https://allenai.org/data/atomic-2020)
-      schema, although ATOMIC-2020 groups both `/r/HasA` and
-      `/r/HasProperty` into `/r/HasProperty`.
-    - The `/r/UsedFor` relation was replaced with `/r/ObjectUse`,
-      which is broader (it can mean _"used for"_, _"used in"_, _"used as"_, etc.).
-      This is also taken from ATOMIC-2020.
- 2. `open`: This part contains open assertions of the form
-    `<subject ; predicate ; object>` extracted directly from web
-    contents. This is the original form of the `canonical` triples.
-
- In both configurations, each assertion is equipped with
- extra information, including a set of semantic `facets`
- (e.g., *LOCATION*, *TEMPORAL*), its `support` (i.e., its number of occurrences),
- and a list of `source_sentences`.
-
- An example row in the `canonical` configuration:
-
- ```JSON
- {
-   "arg1": "elephant",
-   "rel": "/r/HasProperty",
-   "arg2": "intelligent",
-   "support": 15,
-   "facets": [
-     {
-       "value": "extremely",
-       "type": "DEGREE",
-       "support": 11
-     }
-   ],
-   "source_sentences": [
-     {
-       "text": "Elephants are extremely intelligent animals.",
-       "source": "https://www.softschools.com/facts/animals/asian_elephant_facts/2310/"
-     },
-     {
-       "text": "Elephants are extremely intelligent creatures and an elephant's brain can weigh as much as 4-6 kg.",
-       "source": "https://www.elephantsforafrica.org/elephant-facts/"
-     }
-   ]
- }
- ```
-
- ### Data Fields
-
- - **For the `canonical` configuration**
-   - `arg1`: the first argument of the relationship, e.g., *elephant*
-   - `rel`: the canonical relation, e.g., */r/HasProperty*
-   - `arg2`: the second argument of the relationship, e.g., *intelligent*
-   - `support`: the number of occurrences of the assertion, e.g., *15*
-   - `facets`: an array of semantic facets, each containing
-     - `value`: the facet value, e.g., *extremely*
-     - `type`: the facet type, e.g., *DEGREE*
-     - `support`: the number of occurrences of the facet, e.g., *11*
-   - `source_sentences`: an array of source sentences from which the assertion was
-     extracted, each containing
-     - `text`: the raw text of the sentence
-     - `source`: the URL of its parent document
-
- - **For the `open` configuration**
-   - The fields of this configuration are the same as the `canonical`
-     configuration's, except that the (`arg1`, `rel`, `arg2`) fields are
-     replaced with the (`subject`, `predicate`, `object`) fields,
-     which are free-text phrases extracted directly from the source sentences
-     using an Open Information Extraction (OpenIE) tool.
-
- ### Data Splits
-
- There are no predefined splits. All data points belong to a single split called `train`.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- The commonsense knowledge base was created to assist in the development of robust and reliable AI.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Texts were collected from the web using the Bing Search API and went through various cleaning steps before being processed by an OpenIE tool to obtain open assertions.
- The assertions were then grouped into semantically equivalent clusters.
- Take a look at the research paper for more details.
-
- #### Who are the source language producers?
-
- Web users.
-
- ### Annotations
-
- #### Annotation process
-
- None.
-
- #### Who are the annotators?
-
- None.
-
- ### Personal and Sensitive Information
-
- Unknown.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- The knowledge base has been developed by researchers at the
- [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
-
- Contact [Tuan-Phong Nguyen](http://tuan-phong.com) in case of questions or comments.
-
- ### Licensing Information
-
- [The Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/)
-
- ### Citation Information
-
- ```
- @InProceedings{nguyen2021www,
-   title={Advanced Semantics for Commonsense Knowledge Extraction},
-   author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},
-   year={2021},
-   booktitle={The Web Conference 2021},
- }
- ```
-
- ### Contributions
-
- Thanks to [@phongnt570](https://github.com/phongnt570) for adding this dataset.
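
As a minimal sketch of working with the two configurations described in the card above (assuming this repository's Hub id is `ascent_kb`; substitute the actual id if it differs):

```python
# Minimal sketch, assuming the Hub repository id "ascent_kb"; adjust as needed.
from datasets import load_dataset

# "canonical" is the default configuration; "open" holds the raw OpenIE triples.
# streaming=True avoids downloading every parquet shard up front.
kb = load_dataset("ascent_kb", "canonical", split="train", streaming=True)

row = next(iter(kb))
print(row["arg1"], row["rel"], row["arg2"], row["support"])

# Facets and source sentences are lists of dicts, as described under "Data Fields".
for facet in row["facets"]:
    print(facet["type"], facet["value"], facet["support"])
for sentence in row["source_sentences"]:
    print(sentence["text"], "->", sentence["source"])
```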
ascent_kb.py DELETED
@@ -1,147 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Ascent KB: A Deep Commonsense Knowledge Base"""
-
- import json
-
- import datasets
-
-
- _CITATION = """\
- @InProceedings{nguyen2021www,
-   title={Advanced Semantics for Commonsense Knowledge Extraction},
-   author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},
-   year={2021},
-   booktitle={The Web Conference 2021},
- }
- """
-
- _DESCRIPTION = """\
- This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline (https://ascent.mpi-inf.mpg.de/).
- """
-
- _HOMEPAGE = "https://ascent.mpi-inf.mpg.de/"
-
- _LICENSE = "The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/"
-
- # The HuggingFace datasets library doesn't host the datasets but only points to the original files.
- # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
-
- _URL = "https://nextcloud.mpi-klsb.mpg.de/index.php/s/dFLdTQHqiFrt3Q3/download"
-
-
- # DONE: The name of the dataset usually matches the script name, with CamelCase instead of snake_case.
- class AscentKB(datasets.GeneratorBasedBuilder):
-     """Ascent KB: A Deep Commonsense Knowledge Base. Version 1.0.0."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="canonical",
-             version=VERSION,
-             description="This KB contains <arg1 ; rel ; arg2> "
-             "assertions where relations are canonicalized based on ConceptNet relations.",
-         ),
-         datasets.BuilderConfig(
-             name="open",
-             version=VERSION,
-             description="This KB contains open assertions of the form "
-             "<subject ; predicate ; object> extracted directly from web contents.",
-         ),
-     ]
-
-     DEFAULT_CONFIG_NAME = "canonical"
-
-     def _info(self):
-         if self.config.name == "canonical":
-             features = datasets.Features(
-                 {
-                     "arg1": datasets.Value("string"),
-                     "rel": datasets.Value("string"),
-                     "arg2": datasets.Value("string"),
-                     "support": datasets.Value("int64"),
-                     "facets": [
-                         {
-                             "value": datasets.Value("string"),
-                             "type": datasets.Value("string"),
-                             "support": datasets.Value("int64"),
-                         }
-                     ],
-                     "source_sentences": [{"text": datasets.Value("string"), "source": datasets.Value("string")}],
-                 }
-             )
-         else:  # features for the "open" part
-             features = datasets.Features(
-                 {
-                     "subject": datasets.Value("string"),
-                     "predicate": datasets.Value("string"),
-                     "object": datasets.Value("string"),
-                     "support": datasets.Value("int64"),
-                     "facets": [
-                         {
-                             "value": datasets.Value("string"),
-                             "type": datasets.Value("string"),
-                             "support": datasets.Value("int64"),
-                         }
-                     ],
-                     "source_sentences": [{"text": datasets.Value("string"), "source": datasets.Value("string")}],
-                 }
-             )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # my_urls = _URLs[self.config.name]
-         # data_file = dl_manager.download_and_extract(my_urls)
-
-         data_file = dl_manager.download_and_extract(_URL)
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": data_file,
-                     "split": "train",
-                 },
-             ),
-         ]
-
-     # Method parameters are unpacked from `gen_kwargs` as given in `_split_generators`.
-     def _generate_examples(self, filepath, split):
-         """Yields examples as (key, example) tuples."""
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         # The `key` is here for legacy reasons (tfds) and is not important in itself.
-
-         with open(filepath, encoding="utf-8") as f:
-             for id_, row in enumerate(f):
-                 data = json.loads(row)
-                 if self.config.name == "canonical":
-                     data.pop("subject")
-                     data.pop("predicate")
-                     data.pop("object")
-                     yield id_, data
-                 else:  # "open"
-                     data.pop("arg1")
-                     data.pop("rel")
-                     data.pop("arg2")
-                     yield id_, data
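
The script above reads a single raw JSON Lines dump in which every record carries both the canonical (`arg1`/`rel`/`arg2`) and the open (`subject`/`predicate`/`object`) fields, and drops the view that the selected configuration does not use. A standalone sketch of that per-line logic, with a hypothetical file name and helper:

```python
import json

def iter_assertions(path, config="canonical"):
    """Yield assertions from the raw JSON Lines dump for one configuration (hypothetical helper)."""
    # Each record holds both views; keep only the fields of the requested one.
    drop = ("subject", "predicate", "object") if config == "canonical" else ("arg1", "rel", "arg2")
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            for key in drop:
                record.pop(key, None)
            yield record
```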
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"canonical": {"description": "This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline (https://ascent.mpi-inf.mpg.de/).\n", "citation": "@InProceedings{nguyen2021www,\n title={Advanced Semantics for Commonsense Knowledge Extraction},\n author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},\n year={2021},\n booktitle={The Web Conference 2021},\n}\n", "homepage": "https://ascent.mpi-inf.mpg.de/", "license": "The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/", "features": {"arg1": {"dtype": "string", "id": null, "_type": "Value"}, "rel": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}, "support": {"dtype": "int64", "id": null, "_type": "Value"}, "facets": [{"value": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}, "support": {"dtype": "int64", "id": null, "_type": "Value"}}], "source_sentences": [{"text": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "builder_name": "ascent_kb", "config_name": "canonical", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2976697816, "num_examples": 8904060, "dataset_name": "ascent_kb"}}, "download_checksums": {"https://nextcloud.mpi-klsb.mpg.de/index.php/s/dFLdTQHqiFrt3Q3/download": {"num_bytes": 710727536, "checksum": "51fd88a07bca4fa48a9157dd1d93d9bac88ad2b38b5eae662d2cbfad47895016"}}, "download_size": 710727536, "post_processing_size": null, "dataset_size": 2976697816, "size_in_bytes": 3687425352}, "open": {"description": "This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline (https://ascent.mpi-inf.mpg.de/).\n", "citation": "@InProceedings{nguyen2021www,\n title={Advanced Semantics for Commonsense Knowledge Extraction},\n author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},\n year={2021},\n booktitle={The Web Conference 2021},\n}\n", "homepage": "https://ascent.mpi-inf.mpg.de/", "license": "The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/", "features": {"subject": {"dtype": "string", "id": null, "_type": "Value"}, "predicate": {"dtype": "string", "id": null, "_type": "Value"}, "object": {"dtype": "string", "id": null, "_type": "Value"}, "support": {"dtype": "int64", "id": null, "_type": "Value"}, "facets": [{"value": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}, "support": {"dtype": "int64", "id": null, "_type": "Value"}}], "source_sentences": [{"text": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "builder_name": "ascent_kb", "config_name": "open", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2882678298, "num_examples": 8904060, "dataset_name": "ascent_kb"}}, "download_checksums": {"https://nextcloud.mpi-klsb.mpg.de/index.php/s/dFLdTQHqiFrt3Q3/download": {"num_bytes": 710727536, "checksum": "51fd88a07bca4fa48a9157dd1d93d9bac88ad2b38b5eae662d2cbfad47895016"}}, "download_size": 710727536, "post_processing_size": null, "dataset_size": 2882678298, "size_in_bytes": 3593405834}}
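
The `download_checksums` entry above records the SHA-256 checksum and size of the original dump; a small sketch for verifying a manually downloaded copy (the local file name is hypothetical):

```python
import hashlib

# Checksum recorded in dataset_infos.json for the original download URL.
EXPECTED_SHA256 = "51fd88a07bca4fa48a9157dd1d93d9bac88ad2b38b5eae662d2cbfad47895016"

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("ascentkb_dump") == EXPECTED_SHA256  # hypothetical local file name
```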
 
 
open/ascent_kb-train-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc83d2718d8b5d49343e64dbf9711fff1ebf6e7ee4b143421a69d574a26b190b
+ size 155062706
open/ascent_kb-train-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d55108120a984d024a2752c8a38cee42245740e478daf66693fa60cf0b1efb7
+ size 155833213
open/ascent_kb-train-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:218d27ded07087d355fded2d021b27b41d756d62bed6b237ee2695c9905a1f8c
+ size 156709991
open/ascent_kb-train-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa3381ebd765830ca3ce18ab920f8e17958fe9e1e3ac9094fd8b6449d96b0e15
+ size 155763376
open/ascent_kb-train-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c336b16224dcf73576e727cbe2fd3be0e4620c0ca2338818430d18d2546605e
+ size 156807239
open/ascent_kb-train-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e17466ce7ad2ce8bd75c898c57891392dc30b83ceaf6a5b3b2cc79e4ba25502
+ size 119866758
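
With the data now stored as parquet shards, the files can also be read directly, for example over the `hf://` filesystem registered by `huggingface_hub` (a sketch; `USER/ascent_kb` is a placeholder for this repository's actual id):

```python
# Sketch: read one converted parquet shard directly. Requires pandas, pyarrow,
# and huggingface_hub (which registers the hf:// fsspec protocol).
# "USER/ascent_kb" is a placeholder repository id.
import pandas as pd

shard = "hf://datasets/USER/ascent_kb/open/ascent_kb-train-00000-of-00006.parquet"
df = pd.read_parquet(shard)
print(df.columns.tolist())
print(df.head())

# For larger-than-memory work, dask.dataframe.read_parquet can read all shards
# with a glob over the same protocol.
```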