Datasets: has_part

Sub-tasks:
text-scoring
Languages:
English
Multilinguality:
monolingual
Size Categories:
10K<n<100K
Language Creators:
found
Annotations Creators:
machine-generated
ArXiv:
Tags:
Meronym-Prediction
License:
system HF staff committed on
Commit
1529ded
1 Parent(s): 7f7a4b9

Update files from the datasets library (from 1.6.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0

Files changed (1)
  1. has_part.py +0 -1
has_part.py CHANGED
@@ -14,7 +14,6 @@
 # limitations under the License.
 """This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet."""

-from __future__ import absolute_import, division, print_function

 import ast
 from collections import defaultdict
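
As a rough usage sketch of how the updated loading script is typically consumed, the snippet below loads the hasPart KB with the Hugging Face datasets library. It assumes the Hub identifier for this dataset is "has_part", that datasets >= 1.6.0 is installed, and that a "train" split exists; none of these details are stated in the diff above.

from datasets import load_dataset

# Minimal sketch: build the hasPart KB via its dataset script.
# The "has_part" identifier and the "train" split name are assumptions,
# not taken from this commit.
dataset = load_dataset("has_part")

print(dataset)               # show the available splits and their row counts
print(dataset["train"][0])   # inspect a single hasPart record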