system HF staff committed on
Commit
afd1048
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +165 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
  5. has_part.py +118 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,165 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-Generics-KB
+ task_categories:
+ - text-scoring
+ task_ids:
+ - text-scoring-other-Meronym-Prediction
+ ---
+
+ # Dataset Card for HasPart
+
+ ## Table of Contents
+ - [Dataset Card for HasPart](#dataset-card-for-haspart)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://allenai.org/data/haspartkb
+ - **Repository:**
+ - **Paper:** https://arxiv.org/abs/2006.07510
+ - **Leaderboard:**
+ - **Point of Contact:** Peter Clark <peterc@allenai.org>
+
+ ### Dataset Summary
+
+ This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.
+
+ ### Supported Tasks and Leaderboards
+
+ Text Classification / Scoring - meronyms (e.g., `plant` has part `stem`)
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {'arg1': 'plant',
+  'arg2': 'stem',
+  'score': 0.9991798414303377,
+  'synset': ['wn.plant.n.02', 'wn.stalk.n.02'],
+  'wikipedia_primary_page': ['Plant']}
+ ```
+
+ ### Data Fields
+
+ - `arg1`, `arg2`: These are the entities of the meronym, i.e., `arg1` _has\_part_ `arg2`
+ - `score`: Meronymic score per the procedure described below
+ - `synset`: Ontological classification from WordNet for the two entities
+ - `wikipedia_primary_page`: Wikipedia page of the entities
+
+ **Note**: some examples contain synset / wikipedia info for only one of the entities.
+
+ ### Data Splits
+
+ A single `train` split of 49,848 examples.
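+
+ A minimal loading sketch (assuming the loading script in this commit is exposed to `datasets.load_dataset` under the `has_part` identifier):
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the source TSV and builds the single train split.
+ ds = load_dataset("has_part", split="train")
+
+ ex = ds[0]  # one record, with the fields described above
+ print(ex["arg1"], "has part", ex["arg2"], "with score", ex["score"])
+ print(ex["synset"], ex["wikipedia_primary_page"])  # may be empty for unlinked entities
+ ```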
+
+ ## Dataset Creation
+
+ Our approach to hasPart extraction has five steps:
+
+ 1. Collect generic sentences from a large corpus
+ 2. Train and apply a RoBERTa model to identify hasPart relations in those sentences
+ 3. Normalize the entity names
+ 4. Aggregate and filter the entries
+ 5. Link the hasPart arguments to Wikipedia pages and WordNet senses
+
+ Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use **GenericsKB**, a large repository of 3.4M standalone generics previously harvested from a web crawl of 1.7B sentences.
+
+ ### Annotations
+
+ #### Annotation process
+
+ For each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's `Doc.noun_chunks`). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.:
+
+ > `[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.`
+
+ where `[ARG1/2-B/E]` are special tokens denoting the argument boundaries. The `[CLS]` token is projected to two class labels (hasPart/notHasPart), and a softmax layer is applied to give output probabilities for the two classes; we train with cross-entropy loss. The model is RoBERTa-large (24 layers, hidden size 1024, 16 attention heads, 355M parameters in total). We start from the pre-trained weights and fine-tune on a hand-annotated set of ∼2k labeled examples for 15 epochs.
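+
+ The candidate-generation and argument-marking step can be pictured with the rough sketch below. It is illustrative only (not the authors' code) and assumes spaCy with the `en_core_web_sm` model; in the actual pipeline each marked string is scored by the fine-tuned RoBERTa classifier and the softmax probability of hasPart becomes the `score` field.
+
+ ```python
+ from itertools import permutations
+
+ import spacy
+
+ nlp = spacy.load("en_core_web_sm")  # provides Doc.noun_chunks
+
+
+ def mark(sentence, arg1, arg2):
+     """Wrap two candidate noun-chunk spans in [ARG1-B/E] and [ARG2-B/E] markers."""
+     inserts = sorted(
+         [
+             (arg1.start_char, "[ARG1-B]"),
+             (arg1.end_char, "[ARG1-E]"),
+             (arg2.start_char, "[ARG2-B]"),
+             (arg2.end_char, "[ARG2-E]"),
+         ],
+         reverse=True,
+     )
+     for pos, tok in inserts:  # insert from the end so earlier offsets stay valid
+         sentence = sentence[:pos] + tok + sentence[pos:]
+     # [CLS] shown to mirror the example above; RoBERTa's tokenizer normally adds it (as <s>).
+     return "[CLS] " + sentence
+
+
+ sent = "Some pond snails have gills to breathe in water."
+ chunks = list(nlp(sent).noun_chunks)  # candidate wholes / parts
+ for arg1, arg2 in permutations(chunks, 2):  # every ordered (whole, part) pair
+     print(arg1.text, "->", arg2.text, "|", mark(sent, arg1, arg2))
+ ```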
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @misc{bhakthavatsalam2020dogs,
+ title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},
+ author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},
+ year={2020},
+ eprint={2006.07510},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old\u2019s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.\n", "citation": "@misc{bhakthavatsalam2020dogs,\n title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},\n author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},\n year={2020},\n eprint={2006.07510},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://allenai.org/data/haspartkb", "license": "", "features": {"arg1": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}, "score": {"dtype": "float64", "id": null, "_type": "Value"}, "wikipedia_primary_page": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "synset": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "has_part", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4363417, "num_examples": 49848, "dataset_name": "has_part"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1Ev4RqWcPsLI9rgOGAKh-_dFKqcEZ1u-G": {"num_bytes": 7437382, "checksum": "cc38fd2b464bc45c05a6a31162801bc1b3e6a6be43bb4293b53c102e03d27193"}}, "download_size": 7437382, "post_processing_size": null, "dataset_size": 4363417, "size_in_bytes": 11800799}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3aeb01fef67bf1064753d69e3b8ffea6e5e3767962884ff2fba1d0e035d25f55
+ size 623
has_part.py ADDED
@@ -0,0 +1,118 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import ast
+ from collections import defaultdict
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{bhakthavatsalam2020dogs,
+ title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},
+ author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},
+ year={2020},
+ eprint={2006.07510},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.
+ """
+
+ _HOMEPAGE = "https://allenai.org/data/haspartkb"
+
+ _LICENSE = ""
+
+
+ TSV_ID = "1Ev4RqWcPsLI9rgOGAKh-_dFKqcEZ1u-G"
+ FOLDER_ID = "1NzjXX46NnpxtgxBrkBWFiUbsXAMdd-lB"
+ ID = TSV_ID
+
+ _URL = f"https://drive.google.com/uc?export=download&id={ID}"
+
+
+ class HasPart(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "arg1": datasets.features.Value("string"),
+                 "arg2": datasets.features.Value("string"),
+                 "score": datasets.features.Value("float64"),
+                 "wikipedia_primary_page": datasets.features.Sequence(datasets.features.Value("string")),
+                 "synset": datasets.features.Sequence(datasets.features.Value("string")),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         dl_fp = dl_manager.download_and_extract(_URL)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "input_file": dl_fp,
+                     "split": "train",
+                 },
+             ),
+         ]
+
+     def _parse_metadata(self, md):
+         """Parse the metadata column of the TSV file.
+
+         The column holds a Python-literal list of dicts, so it is parsed with
+         ast.literal_eval and the per-entry values are regrouped into lists keyed
+         by field name. Entries that are not dicts are skipped.
+         """
+         md = ast.literal_eval(md)
+         dd = defaultdict(list)
+
+         for entry in md:
+             try:
+                 for k, v in entry.items():
+                     dd[k].append(v)
+             except AttributeError:
+                 continue
+         return dd
+
+     def _generate_examples(self, input_file, split):
+         """Yields examples."""
+         with open(input_file, encoding="utf-8") as f:
+             for id_, line in enumerate(f):
+                 # Each row has five tab-separated fields; the first is unused here.
+                 _, arg1, arg2, score, metadata = line.split("\t")
+                 metadata = self._parse_metadata(metadata)
+                 example = {
+                     "arg1": arg1,
+                     "arg2": arg2,
+                     "score": float(score),
+                     "wikipedia_primary_page": metadata["wikipedia_primary_page"],
+                     "synset": metadata["synset"],
+                 }
+                 yield id_, example
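
For reference, a small self-contained sketch of what `_parse_metadata` does to one metadata cell; the sample string is hypothetical, but formatted the way the parser expects (a Python-literal list of dicts):

```python
import ast
from collections import defaultdict


def parse_metadata(md):
    """Mirror of HasPart._parse_metadata: regroup per-entry values into lists keyed by field name."""
    md = ast.literal_eval(md)
    dd = defaultdict(list)
    for entry in md:
        try:
            for k, v in entry.items():
                dd[k].append(v)
        except AttributeError:
            continue  # skip entries that are not dicts
    return dd


# Hypothetical metadata cell for the ('plant', 'stem') pair shown in the dataset card.
raw = "[{'synset': 'wn.plant.n.02', 'wikipedia_primary_page': 'Plant'}, {'synset': 'wn.stalk.n.02'}]"
print(dict(parse_metadata(raw)))
# {'synset': ['wn.plant.n.02', 'wn.stalk.n.02'], 'wikipedia_primary_page': ['Plant']}
```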