{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:24:16.591192Z"
},
"title": "Automatic Classification of Attributes in German Adjective-Noun Phrases",
"authors": [
{
"first": "Neele",
"middle": [],
"last": "Falk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "neele.falk@ims.uni-stuttgart.de"
},
{
"first": "Yana",
"middle": [],
"last": "Strakatova",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Eva",
"middle": [],
"last": "Huber",
"suffix": "",
"affiliation": {},
"email": "eva.huber@uzh.ch"
},
{
"first": "Erhard",
"middle": [],
"last": "Hinrichs",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Adjectives such as heavy (as in heavy rain) and windy (as in windy day) provide possible values for the attributes intensity and climate, respectively. The attributes themselves are not overtly realized and are in this sense implicit. While these attributes can be easily inferred by humans, their automatic classification poses a challenging task for computational models. We present the following contributions: (1) We gain new insights into the attribute selection task for German. More specifically, we develop computational models for this task that are able to generalize to unseen data. Moreover, we show that classification accuracy depends, inter alia, on the degree of polysemy of the lexemes involved, on the generalization potential of the training data and on the degree of semantic transparency of the adjective-noun pairs in question. (2) We provide the first resource for computational and linguistic experiments with German adjective-noun pairs that can be used for attribute selection and related tasks. In order to safeguard against unwelcome memorization effects, we present an automatic data augmentation method based on a lexical resource that can increase the size of the training data to a large extent.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Adjectives such as heavy (as in heavy rain) and windy (as in windy day) provide possible values for the attributes intensity and climate, respectively. The attributes themselves are not overtly realized and are in this sense implicit. While these attributes can be easily inferred by humans, their automatic classification poses a challenging task for computational models. We present the following contributions: (1) We gain new insights into the attribute selection task for German. More specifically, we develop computational models for this task that are able to generalize to unseen data. Moreover, we show that classification accuracy depends, inter alia, on the degree of polysemy of the lexemes involved, on the generalization potential of the training data and on the degree of semantic transparency of the adjective-noun pairs in question. (2) We provide the first resource for computational and linguistic experiments with German adjective-noun pairs that can be used for attribute selection and related tasks. In order to safeguard against unwelcome memorization effects, we present an automatic data augmentation method based on a lexical resource that can increase the size of the training data to a large extent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There is ample evidence that humans decompose the meaning of objects and events into a set of prototypical semantic relations and their values. These relations, referred to in different frameworks as attributes (Barsalou, 1992) , frame elements (Fillmore, 1982) , thematic relations (Gruber, 1965) , or thematic roles (Jackendoff, 1972) , serve as an effective means to cluster classes of objects and events by degrees of semantic similarity. For example, thematic roles such as buyer and seller help distinguish among different participants in a financial transaction, and adjectives such as young and old group individuals into different equivalence classes for the relation age. Likewise, adjectives such as heavy (as in heavy rain) and windy (as in windy day) provide possible values for the attributes intensity and climate, respectively. The attributes themselves are not overtly realized and are in this sense implicit. While these attributes can be easily inferred by humans, their automatic classification poses a challenging task for computational models, as shown in the recent study by Shwartz and Dagan (2019) for English data. Compared to automatic role assignment for verbal arguments, attribute selection for adjective-noun pairs has received relatively little attention in computational semantics.",
"cite_spans": [
{
"start": 211,
"end": 227,
"text": "(Barsalou, 1992)",
"ref_id": "BIBREF0"
},
{
"start": 245,
"end": 261,
"text": "(Fillmore, 1982)",
"ref_id": "BIBREF6"
},
{
"start": 283,
"end": 297,
"text": "(Gruber, 1965)",
"ref_id": "BIBREF8"
},
{
"start": 318,
"end": 336,
"text": "(Jackendoff, 1972)",
"ref_id": "BIBREF14"
},
{
"start": 1129,
"end": 1153,
"text": "Shwartz and Dagan (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Attribute selection is highly relevant in different NLP tasks, such as information retrieval, topic modelling, and sentiment analysis. Consider a sentiment analysis task. If positive/negative sentiment is expressed about something or someone, it is useful to know what triggers that sentiment. This requires a system to be able to generalize over specific adjectives to more abstract attributes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) I {like/don't like} her siblings. They are a. {bright/stupid} people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Attribute: intelligence b. {friendly/rude} people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For polysemous adjectives, the attribute selection task can be viewed as a coarse-grained word sense disambiguation task. For instance, the adjective bright in example (1a) may acquire different meanings when it combines with different nouns, e.g. bright room, where the attribute is not intelligence, but perception.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute: behaviour",
"sec_num": null
},
{
"text": "In this paper, we frame the attribute selection task as a multiclass classification problem. We conduct experiments on the German dataset GerCo (Strakatova et al., 2020) of adjective-noun phrases. To the best of our knowledge, this is the first attribute analysis for German. Our main contributions are the following: (1) We gain new insights into the attribute selection task for German. More specifically, we develop computational models for this task that are able to generalize to unseen data. Moreover, we show that classification accuracy depends, inter alia, on the degree of polysemy of the lexemes involved, on the generalization potential of the training data and on the degree of semantic transparency of the adjective-noun pairs in question.",
"cite_spans": [
{
"start": 144,
"end": 169,
"text": "(Strakatova et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute: behaviour",
"sec_num": null
},
{
"text": "(2) We provide the first resource for computational and linguistic experiments with German adjectivenoun pairs that can be used for attribute selection and related tasks. In order to safeguard against unwelcome memorization effects, we present an automatic data augmentation method based on a lexical resource that can increase the size of the training data to a large extent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute: behaviour",
"sec_num": null
},
{
"text": "This paper is structured as follows. We discuss related work in section 2. Section 3 describes the dataset in more detail. In section 4, we present the experiments and their results. Finally, we draw conclusions and give directions for future work in section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute: behaviour",
"sec_num": null
},
{
"text": "Earlier studies of attribute selection focus primarily on English data. Hartung (2015) and Hartung et al. (2017) investigate the attributes in AN phrases and create a dataset for English adjective-noun phrases and their corresponding attributes based on the English WordNet. Hartung et al. (2017) try to model the task of selecting underlying attributes such as age for a phrase such as old car with representation learning: they experiment with different composition models to construct a single vector for the adjective-noun combination from the embeddings of the adjective and the noun. This composed vector is then used as a proxy for the underlying attribute, e.g. age, and ranked against possible values for other candidate attributes. Shwartz and Dagan (2019) evaluate different types of word embeddings on a number of lexical semantics tasks, including attribute selection, and probe their ability to model lexical composition. For that purpose, they reformulate the task of attribute selection into a binary classification: given an adjective-noun pair and an attribute, the classifiers predict whether the target attribute is selected for the pair in question. Their findings on the English dataset reveal that this task remains a challenge for all embedding types, though contextualized embeddings clearly outperform static embeddings.",
"cite_spans": [
{
"start": 72,
"end": 86,
"text": "Hartung (2015)",
"ref_id": "BIBREF10"
},
{
"start": 91,
"end": 112,
"text": "Hartung et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 275,
"end": 296,
"text": "Hartung et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 750,
"end": 774,
"text": "Shwartz and Dagan (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Our work differs from previous work in several aspects: we create the first dataset for the annotation of attributes in adjective-noun pairs for German. Our taxonomy of 16 attributes is not as fine-grained as that of Hartung (2015), who distinguishes between 254 attribute labels. Our more compact label set is thus more coarse-grained and more suitable for automatic modeling. We test the automatic models in a multiclass classification setup with the adjective and noun embeddings as input.",
"cite_spans": [
{
"start": 216,
"end": 230,
"text": "Hartung (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Unlike previous work on attribute selection, we take into account whether the semantics of an adjective-noun pair is transparent or not. Since the GerCo dataset contains both collocations and free phrases, we can partition the data accordingly and compare the results obtained by a given classifier for the two classes. In earlier work (Strakatova et al., 2020) , we report on binary classifiers for collocational and free adjective-noun pairs, which did not include prediction of the target attributes. In the present paper, the relevant attributes are taken into account. Therefore, our research contributes to a growing number of studies of semantic transparency, which up to now have focused on multiword expressions and nominal compounds (Reddy et al., 2011; Bell and Sch\u00e4fer, 2013; Jana et al., 2019; Shwartz and Dagan, 2019) in particular, and extends this body of literature to the empirical domain of adjective-noun pairs. Our ability to distinguish between free phrases and collocations allows us to test the finding of Espinosa Anke et al. (2019), who show that semantic relations in collocations are more difficult to predict than other types of relations such as hyponymy, meronymy, etc.",
"cite_spans": [
{
"start": 340,
"end": 365,
"text": "(Strakatova et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 747,
"end": 767,
"text": "(Reddy et al., 2011;",
"ref_id": "BIBREF18"
},
{
"start": 768,
"end": 791,
"text": "Bell and Sch\u00e4fer, 2013;",
"ref_id": "BIBREF1"
},
{
"start": 792,
"end": 810,
"text": "Jana et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 811,
"end": 835,
"text": "Shwartz and Dagan, 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In sum, previous studies confirm that (i) revealing lexical relations in compounds and AN phrases is a challenge in NLP and (ii) relations found in collocations are more difficult to predict than other types of lexical relations. We combine these two findings in our study and model the lexicalsemantic relations, which we call attributes, for both collocations and free phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In our experiments, we use the German dataset of adjective-noun phrases GerCo (Strakatova et al., 2020) , which we annotate with additional semantic information. 1 This dataset is suitable for our study for several reasons: (1) it contains highly polysemous adjectives; (2) half of the dataset is represented by collocations; (3) it is based on a lexical resource, the German wordnet GermaNet (Hamp and Feldweg, 1997; Henrich and Hinrichs, 2010) , which can assist us in augmenting the data and obtaining attribute information about it.",
"cite_spans": [
{
"start": 78,
"end": 103,
"text": "(Strakatova et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 395,
"end": 419,
"text": "(Hamp and Feldweg, 1997;",
"ref_id": "BIBREF9"
},
{
"start": 420,
"end": 447,
"text": "Henrich and Hinrichs, 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The original GerCo dataset contains 3,652 AN phrases manually annotated as \"collocations\" and \"free phrases\". The distinction between the two types is based on the transparency of the adjective in the phrase, which is operationalized as literality (Reddy et al., 2011) . For instance, in the phrase grober Sand 'coarse sand', the adjective has its literal sense of \"rough in texture\", so it is annotated as a free phrase. In the phrase grober Fehler 'gross mistake', the meaning of the adjective is shifted: it does not describe texture in combination with the noun Fehler 'mistake', but refers to its intensity.",
"cite_spans": [
{
"start": 246,
"end": 266,
"text": "(Reddy et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The adjectives in GerCo have been chosen on the basis of the semantic classes that they are assigned to in GermaNet. The advantage of GermaNet as a lexical resource is that, in contrast to the English WordNet, it models adjectives in a hierarchical manner similarly to nouns and verbs. From each of the 16 semantic classes for German adjectives, three adjectives have been selected. Each adjective is paired with the most frequent co-occurring nouns, thus all adjective-noun pairs in the dataset have a strong association. 2 In the present study, we excluded two relational adjectives from the data: barock 'baroque' and steinig 'stony'. Out of the remaining 46 adjectives, 44 have at least two senses (Strakatova et al., 2020) . The top nodes of the GermaNet hierarchy of adjectives represent the 16 semantic classes and the direct hyponyms of the top nodes represent more fine-grained classes of adjectives. 3 Figure 1 shows a part of the taxonomy for one sense of adjectives tief 'deep' and salzig 'salty'. The top nodes are used as attribute labels to annotate the data (see section 3.1).",
"cite_spans": [
{
"start": 702,
"end": 727,
"text": "(Strakatova et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 912,
"end": 920,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We make use of this hierarchical structure for adjectives in GermaNet in two ways: extracting attribute information (subsection 3.1) and automatic augmentation of the dataset (subsection 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "1 The dataset, the splits and the code for running the models on the data are available at https://github.com/Blubberli/IWCS-attributes.git 2 Based on the logDice score (Rychly, 2008) ; 75% of the data has a logDice > 4.14.",
"cite_spans": [
{
"start": 170,
"end": 184,
"text": "(Rychly, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "3 Based on the semantic classification of German adjectives proposed by Hundsnurscher and Splett (1982) . ",
"cite_spans": [
{
"start": 72,
"end": 103,
"text": "Hundsnurscher and Splett (1982)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "For the present study, we add two layers of semantic annotation to the GerCo dataset: (1) by manual annotation: word sense IDs in GermaNet for all the adjectives and nouns in the dataset; (2) by automatic annotation: attributes for all the phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold standard",
"sec_num": "3.1"
},
{
"text": "Manual annotation. Manual annotation has been performed by two advanced students of computational linguistics with a solid background in lexical semantics and lexicography. Each adjective and noun from the GerCo dataset has been disambiguated and annotated with the corresponding sense IDs in GermaNet. We need these annotations for two reasons: to obtain attribute information about the phrases and to augment the data automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold standard",
"sec_num": "3.1"
},
{
"text": "Automatic annotation. To add the attribute annotations, we made use of the hierarchical structure of adjectives in GermaNet. Based on the manually annotated sense IDs of the adjectives, we assign an attribute label to each phrase automatically. For instance, tief 'deep/low' in tiefe Stimme 'deep voice' has been annotated with the sense \"having a low pitch\". The top node in the hierarchy for this sense is perception (see figure 1) -the phrase is assigned this label as an attribute. In tiefe Liebe 'deep love', the adjective is annotated with a different sense -\"very strong, intense\", the attribute label for this sense is intensity. Table 1 provides an overview of all the 16 labels with examples from the dataset (codenamed GerCo+).",
"cite_spans": [],
"ref_spans": [
{
"start": 638,
"end": 645,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gold standard",
"sec_num": "3.1"
},
{
"text": "Collocations. Half of the GerCo+ dataset is represented by collocations. Their distribution, however, is not balanced for each attribute. This concurs with previous observations in the literature that certain meanings tend to be expressed collocationally while others are usually found in free phrases. For instance, intensity is usually expressed in collocations whereas color is usually expressed in free phrases (van der Wouden, 1997) . Figure 2 shows the frequency distribution of collocations and free phrases in GerCo+. Four labels (intensity, relation, manner, feeling) are represented to a large extent by collocations; for perception and substance, on the other hand, the number of free phrases is very high. We expect collocations to be more challenging for the models. Additional adjectives. The number of distinct adjectives in the original GerCo dataset is small. For some attributes (e.g. evaluation), very few adjectives are available. To be able to test each attribute with at least three distinct adjectives, we added 8 adjectives. We manually combined them with suitable nouns from the original dataset and annotated the phrases with the corresponding attributes. The adjectives in the final dataset can select between one and six different attributes (see figure 3) . Most of the adjectives can select more than one attribute: this ambiguity is expected to pose another challenge for the automatic modelling.",
"cite_spans": [
{
"start": 400,
"end": 422,
"text": "(van der Wouden, 1997)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 425,
"end": 433,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1256,
"end": 1265,
"text": "figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Gold standard",
"sec_num": "3.1"
},
{
"text": "Lexical memorization is the tendency of a classifier to memorize the relations between words it has seen in training and the corresponding labels (Levy et al., 2015) . The generalization ability of classifiers and the phenomenon of lexical memorization in classifying lexical inference relations and relations in noun compounds have been investigated by Levy et al. (2015); Dima (2016); Shwartz and Waterson (2018). Since the GerCo+ dataset is rather small, we need to safeguard against the classifier falling into the trap of lexical memorization. We therefore propose an automatic data augmentation method that allows us to create different training and test splits: with modifier overlap, with head overlap, or with no overlap. We also expect a larger dataset to have positive effects on the precision of the machine-learning models. In order to increase the amount of training data, we perform automatic data augmentation relying on lexical and conceptual relations in GermaNet.",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 350,
"end": 368,
"text": "Levy et al. (2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic augmentation",
"sec_num": "3.2"
},
{
"text": "In GermaNet, senses of words are grouped into sets of synonyms (synsets). Synsets are connected to each other via conceptual relations; the main type of such relations is hyponymy/hypernymy, as in pie\u2192pastry\u2192baked goods. Apart from that, some lexical units are interlinked via lexical relations, such as synonymy and antonymy. Attributes are expected to carry over to adjectives and nouns linked in GermaNet via lexical and conceptual relations. Since we know the sense IDs of all the words in the dataset, we only have to extract the semantically related adjectives and nouns to generate new phrases. The new phrases are annotated automatically with the attribute from the original phrase. For instance, the original dataset contains the phrase tiefer Ton 'low-pitched sound' (collocation) with the attribute perception. Both words are provided with the corresponding sense IDs from GermaNet. The antonym of tief in this sense is hoch 'high-pitched' and a co-hyponym of Ton is Pfeifen 'whistle'. This results in a new phrase hohes Pfeifen 'high-pitched whistle' with the attribute perception.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic augmentation",
"sec_num": "3.2"
},
{
"text": "Further phrases can be extracted via the adjectival top nodes in GermaNet by combining non-ambiguous adjectives under those nodes with nouns that can select the corresponding attribute. Selecting only non-ambiguous adjectives, i.e. only adjectives that select a single possible attribute, ensures that the resulting phrases are annotated with the correct attribute. For example, a new phrase for the attribute perception can be constructed by combining the adjective salzig 'salty', which can only express this attribute, with other nouns that can have perception, e.g. Suppe 'soup'. We create two augmented datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic augmentation",
"sec_num": "3.2"
},
{
"text": "1. small: Augment only the adjectives, by adding synonyms, antonyms, direct hypernyms, all hyponyms and co-hyponyms. 2. large: Augment both the adjectives and the nouns by adding synonyms, antonyms, direct hypernyms, all hyponyms and co-hyponyms. Augment the attributes by combining all non-ambiguous hyponyms with suitable nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic augmentation",
"sec_num": "3.2"
},
{
"text": "In order to eliminate nonsensical phrases, the automatically created AN phrases are filtered by their bigram frequencies (>3) in a large corpus consisting of several German treebanks: T\u00fcBa-D/DP (de Kok and P\u00fctz, 2019) and the corpus DE-COW16AX (Sch\u00e4fer, 2015; Sch\u00e4fer and Bildhauer, 2012). Automatically augmented data is expected to be noisy to some extent. To estimate the amount of noise, we randomly extract 100 examples from each augmented dataset and manually assess the examples and the corresponding attributes. This study of random samples shows that around 20% of the automatically gained data is labeled incorrectly.",
"cite_spans": [
{
"start": 435,
"end": 450,
"text": "(Sch\u00e4fer, 2015;",
"ref_id": "BIBREF20"
},
{
"start": 451,
"end": 479,
"text": "Sch\u00e4fer and Bildhauer, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic augmentation",
"sec_num": "3.2"
},
{
"text": "We create two test setups: mixed and balanced. In the mixed setting, we test all the attributes and all the adjectives from the gold standard dataset. In the balanced setting, we use a subset of seven attributes with a balanced distribution of collocations and free phrases to compare the performance on the two types of phrases. The balanced attributes are climate, quantity, time, society, location, behaviour, and evaluation. The models are trained on the two automatically augmented datasets: small and large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset splits",
"sec_num": "3.3"
},
{
"text": "We create three splits of validation/test data from the gold standard GerCo+ dataset. Each test set contains roughly 700 phrases. To investigate the role of lexical memorization in the attribute selection task, we create different lexical settings in the training data: (1) No overlap: the validation/test and training sets have distinct vocabularies. (2) Modifier overlap: the validation/test and training sets share modifiers (adjectives). (3) Head overlap: the validation/test and training sets share heads (nouns).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset splits",
"sec_num": "3.3"
},
{
"text": "In the following experiment, we investigate to what extent attribute selection can be computationally modeled. For that purpose, we use the data described in section 3.3 and train a simple neural network to predict one of the 16 possible attributes given the adjective and noun as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic classification",
"sec_num": "4"
},
{
"text": "We train a feed-forward non-linear classifier with one hidden layer. For each adjective-noun phrase, we extract the embedding for each constituent and apply a linear transformation to the concatenated input embeddings, followed by a ReLU non-linearity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "We experiment with two different embedding types:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "\u2022 fastText (Bojanowski et al., 2017) noncontextualized German word embeddings with subwords trained on Common Crawl (Grave et al., 2018 ).",
"cite_spans": [
{
"start": 11,
"end": 36,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 116,
"end": 135,
"text": "(Grave et al., 2018",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "\u2022 BERT (Devlin et al., 2019) contextualized embeddings produced by a bidirectional transformer trained on Wikipedia, the EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. 5 We treat the adjective-noun phrase as the context sentence, so the embedding of the adjective is only contextualized given the noun (and vice versa).",
"cite_spans": [
{
"start": 7,
"end": 28,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 196,
"end": 197,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "The size of the hidden layer corresponds to the embedding dimension of one constituent (300 for fastText, 768 for BERT), and the output layer has size 16, which corresponds to the number of different attributes. We optimize the cross-entropy loss with Adam and use class weights, with higher weights for the less frequent attributes, because the distribution of the attributes is imbalanced. As BERT comes with 12 layers, we learn a scalar-weighted combination of them. We always apply a dropout of 0.8. We select the model that achieves the best macro F1 score on the validation set, stopping training once the score has not improved for 5 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "We use two baselines: we train each model using either only the adjective embedding or only the noun embedding as input. For the contextualized embeddings, we use the respective embedding after contextualization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "Note that our goal was not to find the best model for the task but to investigate how well a simple model can generalize when trained on a sufficient amount of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling",
"sec_num": "4.1"
},
{
"text": "(i) Generalization One of the research questions we want to answer with this experiment is whether the automatic models can learn abstractions solely on the basis of semantically related adjective-noun pairs. If the model has seen phrases like black limousine and yellow truck in training, is it able to learn the abstract attribute perception and predict it correctly for test phrases such as red car?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "In the best case, although the model has neither seen red nor car in the training set, it can arrive at the correct solution via lexical similarities: it has learned that colors express perception when combined with e.g. artifacts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "As mentioned in Section 3.2, it has been shown for other tasks in lexical semantics that the abstraction ability of automatic models in supervised learning is diminished if constituents of the phrases in the test set have already occurred in training. It may then be easier for the model to memorize the most frequent or only class label for specific words in order to solve the task. We investigate to what extent this phenomenon applies to attribute selection. This effect would be expected especially for adjectives that occur with only one attribute. The phenomenon could have a particularly negative effect for ambiguous adjectives: in the worst case, lexical memorization overrides the less frequent senses, as only the most dominant attribute is predicted. Table 3 shows the results for both embedding types for the different training data and the adjective and noun baselines. We report the average macro F1 score over all attributes, so each attribute is scored equally, regardless of the number of test instances.",
"cite_spans": [],
"ref_spans": [
{
"start": 757,
"end": 764,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
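The choice of macro-averaged F1 can be made concrete with a toy illustration (invented numbers, not the paper's results): macro F1 treats a rare attribute on a par with a frequent one, whereas micro F1 would be dominated by the frequent class.

```python
# Toy contrast between micro- and macro-averaged F1 when a rare class
# (climate, 2 instances) is always missed in favor of the frequent class
# (time, 8 instances).
from sklearn.metrics import f1_score

y_true = ["time"] * 8 + ["climate"] * 2
y_pred = ["time"] * 10  # the rare attribute is never predicted

micro = f1_score(y_true, y_pred, average="micro")
macro = f1_score(y_true, y_pred, average="macro", zero_division=0)
print(round(micro, 3), round(macro, 3))  # 0.8 0.444
```

Micro F1 (= accuracy here) still looks reasonable at 0.8, while macro F1 averages the per-class scores (F1 = 0.889 for time, 0 for climate) and drops to 0.444, exposing the missed attribute.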
{
"text": "First, it becomes clear that both models are capable of abstracting to some degree with fastText outperforming BERT by 6%. It is particularly interesting that there is hardly any difference between the small and the large data set, although the large data set contains ten times more training instances. This demonstrates that it is not the size of the training data alone that matters for the generalization ability of the models. A sufficient lexical variety is much more important. This variety seems to be covered in the smaller training data set, such that an increase in size does not have a large effect on the general result. It is also evident that a partial overlap of adjectives and nouns leads to a significant improvement especially for BERT. This effect is similar on the smaller data set for modifier and head overlap, on the larger one a modifier overlap brings more advantages. The number of unique nouns is much higher in this data set, so it is less likely that lexical memorization can occur with the head overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "The results for the adjectives and noun baseline illustrate that while it is necessary to have both constituents as input for the models with fastText embeddings, the contextualization of the BERT embeddings is sufficient to convey almost the same information via one of the two contextualized vectors. In both cases the adjective baseline is stronger, indicating that the adjective plays a more important role for the task than the noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "(ii) Attributes Figure 4 and Figure 5 show the performance for each attribute on the large dataset, for no overlap, modifier overlap and head overlap. The attributes time, climate, perception and evaluation can be learned particularly well without overlap. A possible explanation is that adjectives and nouns selecting these attributes have a high semantic similarity. For example, adjectives selecting time are more similar to each other than adjectives selecting intensity. For such attributes, the generalization is more difficult. For instance, manner and intensity are not easy to predict despite a high amount of training data (14,084 and 8,714 training instances). Attributes that benefit most from lexical overlap are body, feeling, behavior, and motion.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 29,
"end": 37,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "(iii) Polysemy With respect to lexical memorization, the findings here are mixed. While across-theboard improvements for each attribute with modifier or head overlap indicate that this phenomenon takes place, the partial overlap does not automatically lead to predicting the attribute for the polysemous adjectives that has the highest frequency in the training data. Table 4 depicts how many of all the possible attributes for the ambiguous adjectives in the test set are covered. We sum the number of correctly recognized attributes for each adjective. Out of the total of 144, roughly two thirds are recognized by the models for each setup, the number is even higher for the modifier overlap. For instance, in the case of the adjective zart 'tender', substance, intensity and manner were recognized without overlap, while body was additionally recognized with the modifier overlap. Table 5 shows the average accuracy for adjectives with different degrees of ambiguity regarding their possible attributes. A lower degree of ambiguity leads to better results. For a higher degree of ambiguity the modifier overlap brings significant improvements so the models can learn to better distinguish the different senses for the adjectives based on the training data. It is also worth noting that there is a considerable jump in accuracy when we compare adjectives that co-occur with four or more attributes with those that select at most three attributes. (iv) Transparency To investigate the difference in the performance between collocations and free phrases, we use a smaller balanced test set (described in Section 3.3). Table 6 presents the results as the average of the Macro F1 scores of all 7 attributes in the test set. Overall, there is a consistent difference between collocations and free phrases across all training data: free phrases are more accurately predicted in all cases. 
Contextualized embeddings were ex-pected to yield better results for collocations because they are dynamically conditioned on the local context. Therefore, adjective and noun are represented by different vectors for different phrases. However, the model with BERT embeddings is worse if no lexical overlap is present. One reason for this may be that the contextualization of BERT does not give an advantage for a word-based task. It is more difficult to find regularities because the similarities between words could become blurred due to contextualization.",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 375,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 885,
"end": 892,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1619,
"end": 1626,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
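The sense-coverage count behind Table 4 can be sketched as follows (toy triples, not the actual model predictions): for each ambiguous adjective we collect the distinct gold attributes that are recovered at least once, then sum the counts.

```python
# Sketch of the Table 4 count: how many distinct (adjective, attribute)
# senses are predicted correctly at least once. The triples are invented.
from collections import defaultdict

# (adjective, gold attribute, predicted attribute)
rows = [
    ("zart", "substance", "substance"),
    ("zart", "intensity", "intensity"),
    ("zart", "manner", "manner"),
    ("zart", "body", "manner"),            # body is missed
    ("hell", "perception", "perception"),
    ("hell", "evaluation", "perception"),  # evaluation is missed
]

recognized = defaultdict(set)
for adj, gold, pred in rows:
    if gold == pred:
        recognized[adj].add(gold)

covered = sum(len(senses) for senses in recognized.values())
print(covered)  # → 4 (three senses of zart plus one sense of hell)
```

Applied to the real test set, the corresponding total would be out of 144 possible senses, as reported above.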
{
"text": "Although the performance for collocations is worse than for free phrases in general, for some attributes, the models are successful. This finding confirms the hypothesis that there are regularities also for collocations in spite of the general assump-tion of their idiosyncrasy. For instance, the attribute climate has a high F1 score for collocations in all experimental settings (between 0.67 and 0.87). It indicates that meaning shifts of the adjectives selecting this attribute are regular. Another example of such a regular meaning shift is provided by the polysemous adjective s\u00fc\u00df 'sweet'. In its literal meaning, it refers to the attribute perception as in s\u00fc\u00dfe Torte/Tee 'sweet cake/tea'. However, s\u00fc\u00df can also refer to the attribute evaluation when it is combined for instance with nouns from the semantic field 'person', as in s\u00fc\u00dfes Kind 'sweet child'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "By contrast, other collocations are highly lexicalized. These cases are hard to classify and remain a challenge. For instance, the models fail to predict the attribute evaluation for examples such as helle Zukunft 'bright future'. Table 6 : Average Macro F1 score for the balanced set in terms of collocations and free phrases for each training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "4.2"
},
{
"text": "In this paper we present a study on attribute selection in German adjective-noun phrases. Experiments in different training settings with and without lexical overlap show that it is possible to learn attribute selection patterns based on semantically related adjectives and nouns: abstract attributes such as perception, time, or society can be learned and predicted for new, unseen data. The results of the experiments with different lexical overlap settings are in line with previous research: partial lexical overlap leads to better results on this task. However, this is not only due to lexical memorization. The models are still able to decide which attribute to select for an ambiguous adjective in the test set if it appears in training with all its possible meanings, based on the nouns combined with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "The experiments confirm that attributes are more difficult to predict for collocations than for free phrases. However, not all types of collocations are equally difficult. Attributes can be learned correctly for collocations when the meaning shift occurs systematically. Strongly lexicalized collocations cannot benefit from these regularities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "As future work it would be interesting to investigate attribute-selection in other languages, e.g., in Russian. Compounding in Russian is not as productive as in German and the function of compounds is often taken over by adjective-noun phrases, so a higher degree of lexicalization would be expected. This could result in an even greater difference between collocations and free phrases. Secondly, it would be interesting to investigate how using a full sentence as context impacts the results, especially in ambiguous cases. For instance, the phrase st\u00fcrmischer Tag 'stormy day' can either express the attribute climate when the adjective is used in its literal sense or the attribute manner when stormy = chaotic. For such phrases, disambiguation is only possible in context. Finally, it would be useful if a model could learn a general intuition about whether a phrase is a collocation or a free phrase and which attributes are selected by an adjective in its literal and collocational senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "https://github.com/dbmdz/berts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank our student assistants Daniela Rossmann, Alina Leippert and Mareile Winkler for their help with the annotations. We are also very grateful to the anonymous reviewers for their insightful and helpful comments that helped us to improve the paper. Financial support of the research reported here has been provided by the grant Modellierung lexikalisch-semantischer Beziehungen von Kollokationen awarded by the Deutsche Forschungsgemeinschaft (DFG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Frames, concepts, and conceptual fields",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barsalou",
"suffix": ""
}
],
"year": 1992,
"venue": "Frames, fields, and contrasts: New essays in semantic and lexical organization",
"volume": "",
"issue": "",
"pages": "21--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W. Barsalou. 1992. Frames, concepts, and conceptual fields. In Frames, fields, and contrasts: New essays in semantic and lexical organization, pages 21-74. Lawrence Erlbaum Associates, Inc.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic transparency: challenges for distributional semantics",
"authors": [
{
"first": "Melanie",
"middle": [
"J"
],
"last": "Bell",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IWCS 2013 Workshop Towards a Formal Distributional Semantics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melanie J. Bell and Martin Sch\u00e4fer. 2013. Semantic transparency: challenges for distributional seman- tics. In Proceedings of the IWCS 2013 Workshop Towards a Formal Distributional Semantics, pages 1-10, Potsdam, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the compositionality and semantic interpretation of English noun compounds",
"authors": [
{
"first": "Corina",
"middle": [],
"last": "Dima",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "27--39",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1604"
]
},
"num": null,
"urls": [],
"raw_text": "Corina Dima. 2016. On the compositionality and se- mantic interpretation of English noun compounds. In Proceedings of the 1st Workshop on Representa- tion Learning for NLP, pages 27-39, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Collocation classification with unsupervised relation vectors",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5765--5772",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1576"
]
},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa Anke, Steven Schockaert, and Leo Wan- ner. 2019. Collocation classification with unsuper- vised relation vectors. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5765-5772, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Frame semantics",
"authors": [
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1982,
"venue": "Linguistics in the Morning Calm",
"volume": "",
"issue": "",
"pages": "111--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles J. Fillmore. 1982. Frame semantics. In Lin- guistics in the Morning Calm, pages 111-137. Han- shin Publishing Co., Seoul, South Korea.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Studies in Lexical Relations",
"authors": [
{
"first": "Jeffrey",
"middle": [
"S"
],
"last": "Gruber",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey S. Gruber. 1965. Studies in Lexical Relations. Ph.D. thesis, MIT. Distributed by: Indiana Univer- sity Linguistics Club, Bloomington, Indiana.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "GermaNeta Lexical-Semantic Net for German",
"authors": [
{
"first": "Birgit",
"middle": [],
"last": "Hamp",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Feldweg",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a Lexical-Semantic Net for German. In Proceedings of the ACL workshop Automatic Information Extrac- tion and Building of Lexical Semantic Resources for NLP Applications.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Hartung",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Hartung. 2015. Distributional Semantic Mod- els of Attribute Meaning in Adjectives and Nouns. Ph.D. thesis, Heidelberg University, Germany.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning compositionality functions on word embeddings for modelling attribute meaning in adjective-noun phrases",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Hartung",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Kaupmann",
"suffix": ""
},
{
"first": "Soufian",
"middle": [],
"last": "Jebbara",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "54--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Hartung, Fabian Kaupmann, Soufian Jebbara, and Philipp Cimiano. 2017. Learning composition- ality functions on word embeddings for modelling attribute meaning in adjective-noun phrases. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 54-64, Va- lencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "GernEdiT -The GermaNet Editing Tool",
"authors": [
{
"first": "Verena",
"middle": [],
"last": "Henrich",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Hinrichs",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh Conference on International Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2228--2235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Verena Henrich and Erhard Hinrichs. 2010. GernEdiT -The GermaNet Editing Tool. In Proceedings of the Seventh Conference on International Language Re- sources and Evaluation, pages 2228-2235.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantik der Adjektive des Deutschen",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Hundsnurscher",
"suffix": ""
},
{
"first": "Jochen",
"middle": [],
"last": "Splett",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-322-87624-9"
]
},
"num": null,
"urls": [],
"raw_text": "Franz Hundsnurscher and Jochen Splett. 1982. Se- mantik der Adjektive des Deutschen. VS Verlag f\u00fcr Sozialwissenschaften.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semantic Interpretation in Generative Grammar",
"authors": [
{
"first": "Ray",
"middle": [
"S"
],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray S. Jackendoff. 1972. Semantic Interpretation in Generative Grammar. MIT Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the compositionality prediction of noun phrases using poincar\u00e9 embeddings",
"authors": [
{
"first": "Abhik",
"middle": [],
"last": "Jana",
"suffix": ""
},
{
"first": "Dima",
"middle": [],
"last": "Puzyrev",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3263--3274",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1316"
]
},
"num": null,
"urls": [],
"raw_text": "Abhik Jana, Dima Puzyrev, Alexander Panchenko, Pawan Goyal, Chris Biemann, and Animesh Mukherjee. 2019. On the compositionality predic- tion of noun phrases using poincar\u00e9 embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3263-3274, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stylebook for the T\u00fcbingen Treebank of Dependencyparsed German (T\u00fcBa-D/DP). Seminar f\u00fcr Sprachwissenschaft",
"authors": [
{
"first": "Kok",
"middle": [],
"last": "Dani\u00ebl De",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "P\u00fctz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani\u00ebl de Kok and Sebastian P\u00fctz. 2019. Style- book for the T\u00fcbingen Treebank of Dependency- parsed German (T\u00fcBa-D/DP). Seminar f\u00fcr Sprach- wissenschaft, Universit\u00e4t T\u00fcbingen, T\u00fcbingen, Ger- many.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Do supervised distributional methods really learn lexical inference relations?",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "970--976",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional meth- ods really learn lexical inference relations? In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 970-976, Denver, Colorado. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An empirical study on compositionality in compound nouns",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "210--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Diana McCarthy, and Suresh Manand- har. 2011. An empirical study on compositional- ity in compound nouns. In Proceedings of 5th In- ternational Joint Conference on Natural Language Processing, pages 210-218, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Lexicographer-Friendly Association Score",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Rychly",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2nd Workshop on Recent Advances in Slavonic Natural Languages Processing",
"volume": "",
"issue": "",
"pages": "6--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Rychly. 2008. A Lexicographer-Friendly Associ- ation Score. In Sojka, Petr /Hor\u00e1k, Ale\u0161 (Hg.): Pro- ceedings of the 2nd Workshop on Recent Advances in Slavonic Natural Languages Processing, RASLAN 2008, pages 6-9, Brno.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Processing and Querying Large Web Corpora with the COW14 Architecture",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Challenges in the Management of Large Corpora 3 (CMLC-3)",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer. 2015. Processing and Querying Large Web Corpora with the COW14 Architecture. In Pro- ceedings of Challenges in the Management of Large Corpora 3 (CMLC-3), pages 28-34, Lancaster, UK. UCREL, IDS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Building Large Corpora from the Web Using a New Efficient Tool Chain",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Bildhauer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "486--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer and Felix Bildhauer. 2012. Build- ing Large Corpora from the Web Using a New Ef- ficient Tool Chain. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), pages 486-493, Istan- bul, Turkey. European Language Resources Associ- ation (ELRA).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Still a pain in the neck: Evaluating text representations on lexical composition. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "7",
"issue": "",
"pages": "403--419",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00277"
]
},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz and Ido Dagan. 2019. Still a pain in the neck: Evaluating text representations on lexical com- position. Transactions of the Association for Com- putational Linguistics, 7:403-419.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Olive oil is made of olives, baby oil is made for babies: Interpreting noun compounds using paraphrases in a neural model",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Waterson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "218--224",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2035"
]
},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz and Chris Waterson. 2018. Olive oil is made of olives, baby oil is made for babies: Inter- preting noun compounds using paraphrases in a neu- ral model. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 218-224, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "All that glitters is not gold: A gold standard of adjectivenoun collocations for German",
"authors": [
{
"first": "Yana",
"middle": [],
"last": "Strakatova",
"suffix": ""
},
{
"first": "Neele",
"middle": [],
"last": "Falk",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Fuhrmann",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Hinrichs",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Rossmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4368--4378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yana Strakatova, Neele Falk, Isabel Fuhrmann, Erhard Hinrichs, and Daniela Rossmann. 2020. All that glitters is not gold: A gold standard of adjective- noun collocations for German. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 4368-4378, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Negative Contexts. Collocation, polarity, and multiple negation",
"authors": [
{
"first": "Ton",
"middle": [],
"last": "Van Der Wouden",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ton van der Wouden. 1997. Negative Contexts. Collo- cation, polarity, and multiple negation. Routledge.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A part of the taxonomy of adjectives in Ger-maNet for tief 'deep' and salzig 'salty'. The top node is used as attribute label to annotate the GerCo dataset",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Distribution of free phrases and collocations in the GerCo+ dataset for each attribute.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Distribution of the number of different attributes per adjective.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "General Macro F1 for each attribute for fastText -each training set",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>data</td><td>size</td><td>adj</td><td colspan=\"2\">nn correct</td></tr><tr><td colspan=\"2\">gold standard 3,093</td><td>46</td><td>2,030</td><td>-</td></tr><tr><td>small</td><td>21,498</td><td>1,980</td><td>2,538</td><td>80%</td></tr><tr><td>large</td><td colspan=\"3\">232,389 4,630 36,659</td><td>79%</td></tr></table>",
"text": "gives an overview of the data."
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Data overview: the amount of phrases, unique adjectives, unique nouns and the amount of correct phrases in the random sample extracted from each augmented dataset and evaluated manually."
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"4\">: Average Macro F1 Score over all attributes for</td></tr><tr><td colspan=\"4\">each training set. The results are presented for train-</td></tr><tr><td colspan=\"4\">ing on the adjective and noun (both), and for the two</td></tr><tr><td colspan=\"4\">baselines: trained only on adjectives (adj) and only on</td></tr><tr><td>nouns (noun)</td><td/><td/><td/></tr><tr><td colspan=\"4\">training set no overlap modifier overlap head overlap</td></tr><tr><td>fastText</td><td>97</td><td>105</td><td>99</td></tr><tr><td>BERT</td><td>95</td><td>105</td><td>99</td></tr></table>",
"text": ""
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Number of correctly predicted senses of polysemous adjectives for each embedding type and each training setup trained on the large dataset; the total number of different senses in the test data: 144."
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Average accuracy for all adjectives with a spe-</td></tr><tr><td>cific number of possible attributes (no. attr) for the</td></tr><tr><td>setup with no overlap (no), modifier overlap (mod) and</td></tr><tr><td>head overlap (head).</td></tr></table>",
"text": ""
}
}
}
}