# Dataset: has_part

Languages: en
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: machine-generated

# Dataset Card for HasPart

### Dataset Summary

This dataset is a new knowledge base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other available resources, it is the first that is all three of: accurate (90% precision), salient (covers relationships a person may mention), and high-coverage of common terms (approximated as within a 10-year-old's vocabulary). It also has several times more hasPart entries than the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers and argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.

### Supported Tasks

Text Classification / Scoring - meronyms (e.g., plant has part stem)

### Languages

English

## Dataset Structure

### Data Instances

```python
{'arg1': 'plant',
 'arg2': 'stem',
 'score': 0.9991798414303377,
 'synset': ['wn.plant.n.02', 'wn.stalk.n.02'],
 'wikipedia_primary_page': ['Plant']}
```
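Each instance can be handled as a plain Python dict. A minimal sketch (field names taken from the example above; the 0.9 threshold is an arbitrary illustrative choice) showing how to read the relation and its confidence score:

```python
# Sketch: working with one has_part instance as a plain dict.
# Field names follow the example instance above.
instance = {
    'arg1': 'plant',
    'arg2': 'stem',
    'score': 0.9991798414303377,
    'synset': ['wn.plant.n.02', 'wn.stalk.n.02'],
    'wikipedia_primary_page': ['Plant'],
}

# Render the meronym relation as a readable triple.
triple = f"{instance['arg1']} has_part {instance['arg2']}"

# Keep only high-confidence extractions (0.9 is an arbitrary threshold).
is_high_confidence = instance['score'] > 0.9
```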


### Data Fields

- `arg1`, `arg2`: the entities of the meronym relation, i.e., *arg1* has_part *arg2*
- `score`: meronymic score per the procedure described below
- `synset`: ontological classification from WordNet for the two entities
- `wikipedia_primary_page`: Wikipedia page of the entities

Note: some examples contain synset / wikipedia info for only one of the entities.
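Because synset and Wikipedia information may be present for only one entity (or absent), downstream code should treat those fields as optional. A hedged sketch (the helper name is hypothetical):

```python
# Sketch: defensively reading optional fields, since some instances
# only carry synset / Wikipedia info for one of the two entities.
def entity_links(instance):
    """Return (synsets, wikipedia_pages), defaulting to empty lists."""
    return (instance.get('synset') or [],
            instance.get('wikipedia_primary_page') or [])

# Instance with no linking information: both default to [].
synsets, pages = entity_links({'arg1': 'dog', 'arg2': 'tail', 'score': 0.95})
```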

### Data Splits

The dataset is distributed as a single training split.

## Dataset Creation

Our approach to hasPart extraction has five steps:

1. Collect generic sentences from a large corpus
2. Train and apply a RoBERTa model to identify hasPart relations in those sentences
3. Normalize the entity names
4. Aggregate and filter the entries
5. Link the entities to appropriate concepts in Wikipedia and WordNet

Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use GenericsKB, a large repository of 3.4M standalone generics previously harvested from a web crawl of 1.7B sentences.
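The first four steps above can be sketched as a skeletal pipeline. Every helper below is a hypothetical stand-in for illustration only (the real system reads GenericsKB and uses a fine-tuned RoBERTa classifier; entity linking is omitted):

```python
# Skeletal sketch of the extraction pipeline (steps 1-4).
# All helpers are hypothetical stand-ins, not the real implementation.

def collect_generics():
    # Stand-in for reading GenericsKB (3.4M generic sentences).
    return ["Dogs have tails.", "Plants have stems."]

def classify_haspart(sentence):
    # Stand-in for the RoBERTa hasPart classifier: yields
    # (whole, part, score) candidates found in the sentence.
    toy = {"Dogs have tails.": [("dogs", "tails", 0.98)],
           "Plants have stems.": [("plants", "stems", 0.99)]}
    return toy.get(sentence, [])

def normalize(name):
    # Stand-in for entity-name normalization (toy singularization).
    return name.rstrip("s")

def extract_haspart(min_score=0.9):
    entries = {}
    for sentence in collect_generics():                        # step 1
        for whole, part, score in classify_haspart(sentence):  # step 2
            key = (normalize(whole), normalize(part))          # step 3
            if score >= min_score:                             # step 4:
                entries[key] = max(score, entries.get(key, 0.0))  # aggregate/filter
    return entries
```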

### Annotations

#### Annotation process

For each sentence S in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's `Doc.noun_chunks`). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.:
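Candidate (whole, part) pairs are the ordered pairs of distinct noun chunks. A minimal sketch in plain Python (the chunks are hard-coded here for illustration; in the real pipeline they come from spaCy's `Doc.noun_chunks`):

```python
from itertools import permutations

# Sketch: enumerate candidate (arg1, arg2) pairs from noun chunks.
# Chunks are hard-coded here; the real system gets them from spaCy.
chunks = ["some pond snails", "gills", "water"]

# Every ordered pair of distinct chunks is a candidate hasPart
# relation for the classifier to score.
candidate_pairs = list(permutations(chunks, 2))
```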

[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.

where [ARG1/2-B/E] are special tokens denoting the argument boundaries. The [CLS] token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers, hidden size 1024, 16 attention heads, 355M parameters in total). We use the pre-trained weights available with the model and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples.
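Inserting the boundary tokens can be sketched as a simple string transformation. This is a simplified, string-level illustration (the real model operates on wordpiece tokens, and the helper name is hypothetical):

```python
def mark_arguments(sentence, arg1, arg2):
    # Simplified sketch: wrap the first occurrence of each argument
    # in boundary tokens, mimicking the classifier's input format.
    # The real system marks spans at the wordpiece level.
    marked = sentence.replace(arg1, f"[ARG1-B]{arg1}[ARG1-E]", 1)
    marked = marked.replace(arg2, f"[ARG2-B]{arg2}[ARG2-E]", 1)
    return "[CLS] " + marked

s = mark_arguments("Some pond snails have gills to breathe in water.",
                   "Some pond snails", "gills")
# Reproduces the example input shown above.
```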