|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:09:25.851428Z" |
|
}, |
|
"title": "How does BERT capture semantics? A closer look at polysemous words", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yenicelik", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "ETH Z\u00fcrich", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "ETH Z\u00fcrich", |
|
"location": {} |
|
}, |
|
"email": "florian.schmidt@inf.ethz.ch" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The recent paradigm shift to contextual word embeddings has seen tremendous success across a wide range of downstream tasks. However, little is known on how the emergent relation of context and semantics manifests geometrically. We investigate polysemous words as one particularly prominent instance of semantic organization. Our rigorous quantitative analysis of linear separability and cluster organization in embedding vectors produced by BERT shows that semantics do not surface as isolated clusters but form seamless structures, tightly coupled with sentiment and syntax.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The recent paradigm shift to contextual word embeddings has seen tremendous success across a wide range of downstream tasks. However, little is known on how the emergent relation of context and semantics manifests geometrically. We investigate polysemous words as one particularly prominent instance of semantic organization. Our rigorous quantitative analysis of linear separability and cluster organization in embedding vectors produced by BERT shows that semantics do not surface as isolated clusters but form seamless structures, tightly coupled with sentiment and syntax.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word embeddings have not only proven to be excellent representations in standalone tasks (Mikolov et al., 2013; Pennington et al., 2014; Wang et al., 2019) but have revolutionized the way modern NLP architectures are built (Collobert et al., 2011) , and by now encode text input for virtually every task available. Recently, this approach has been paired with the transformer architecture (Vaswani et al., 2017) and a selection of pre-training tasks to bootstrap more powerful contextual word embeddings such as the ones produced by BERT (Devlin et al., 2018) . s The paradigm of encoding a word in its context has elevated the embedding methodology once more and from several perspectives. First, performance improvements on down-stream tasks are extraordinary across a wide range of tasks (Ethayarajh, 2019; Devlin et al., 2018; Wang et al., 2018) . Second, the embedding space now must incorporate a vastly larger number of vectors, and its organization becomes an interesting research question on its own, especially given the largely unattributed performance gains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 111, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 136, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 155, |
|
"text": "Wang et al., 2019)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 247, |
|
"text": "(Collobert et al., 2011)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 411, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 559, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 791, |
|
"end": 809, |
|
"text": "(Ethayarajh, 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 830, |
|
"text": "Devlin et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 831, |
|
"end": 849, |
|
"text": "Wang et al., 2018)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we investigate the important concept of polysemy as one prominent example of semantic sub-space organization. Given that a word such as 'bank' can have several meanings, how are the corresponding vectors arranged in a contextual embedding space?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We investigate the organization of polysemous words in BERT embeddings through the concepts of separability and clusterability using the Word-Net annotations in SemCor (Miller et al., 1990) . Our particular focus is a rigorous quantitative rather than purely qualitative analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 189, |
|
"text": "(Miller et al., 1990)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Work connecting polysemy and word vector representations is often limited to static word embeddings where context has to be re-introduced through graph-based approaches (Remus and Biemann, 2018) , auxiliary corpora (Pelevina et al., 2016) , or even image data (Bruni et al., 2013) . Usually, word sense disambiguation (WSD) performance is then chosen as a proxy to semantic disambiguation (Pilehvar and Camacho-Collados, 2019 ), yet no insights into the organization of the vector space are obtained. In a similar spirit, Kageback and Salomonsson (2016) add context through a recurrent encoder, yet do not analyze the geometry of these encodings. In the meantime, the WSD task has been tackled successfully with BERT embeddings and Wiedemann et al. (2019) show that even a non-parametric approach suffices, which confirms that BERT must arrange word vectors according to semantic properties and suggests that no additional semantic pretraining is necessary (Levine et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 194, |
|
"text": "(Remus and Biemann, 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 238, |
|
"text": "(Pelevina et al., 2016)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 280, |
|
"text": "(Bruni et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 425, |
|
"text": "(Pilehvar and Camacho-Collados, 2019", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 732, |
|
"end": 755, |
|
"text": "Wiedemann et al. (2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 957, |
|
"end": 978, |
|
"text": "(Levine et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "When BERT embeddings of a polysemous word are analyzed, the findings are often summarized as a Silhouette score (Rousseeuw, 1987) or custom variance measures (Ethayarajh, 2019) . While this allows to compare the average displacement due to semantic change across words, it does not give us a good sense of the overall structure of word vectors. In addition, the embedding space produced by BERT has been analyzed in terms of syntactic features, such as parse-trees (Coenen et al., 2019; Jawahar et al., 2019) , part-of-speech, verbs and arguments (Shi et al., 2019; Ribeiro et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 129, |
|
"text": "(Rousseeuw, 1987)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 176, |
|
"text": "(Ethayarajh, 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 486, |
|
"text": "(Coenen et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 508, |
|
"text": "Jawahar et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 565, |
|
"text": "(Shi et al., 2019;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 566, |
|
"end": 587, |
|
"text": "Ribeiro et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "It is clear that BERT distinguishes polysemous words at least locally by nearest neighbors (Schmidt and Hofmann, 2020) . However, the extent to which clusters are formed and how they are connected has only been addressed qualitatively (Coenen et al., 2019; Jawahar et al., 2019; Wiedemann et al., 2019) , and no agreed-upon answer has emerged. This can be partly attributed to their qualitative methodology.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 118, |
|
"text": "(Schmidt and Hofmann, 2020)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 256, |
|
"text": "(Coenen et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 278, |
|
"text": "Jawahar et al., 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 302, |
|
"text": "Wiedemann et al., 2019)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "How can we verify a hypothesis about the organization of polysemous words without manually inspecting the geometry for each word? Given a set of sentences with annotated polysemy, two strategies emerge: First, we can inspect the embedding space through the lens of a classifier with a clearly defined hypothesis set and take its accuracy as a signifier for the corresponding organization. Second, we can use an unsupervised approach to detect sub-space organization and compare its result to the WordNet labels using an appropriate similarity metric. We will proceed by analyzing both questions. In our experiments, we consider the output of the last layer of BERT as the contextual word embeddings since this layer is most commonly used for downstream tasks, as depicted in Figure 1 . To work with a discrete formalization of semantics, we use the WordNet 3.0 annotations in the SemCor 3.0 sentence dataset (Miller et al., 1990) . This allows us to retrieve embeddings that are annotated with a ground-truth semantic class label. SemCor is one of the largest sense-annotated corpora with 37, 176 sentences, enabling us to quantify semantics and sample labelled word embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 908, |
|
"end": 929, |
|
"text": "(Miller et al., 1990)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 775, |
|
"end": 783, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3" |
|
}, |
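A minimal sketch of the embedding-extraction step described above, assuming the HuggingFace transformers package; the sentence list and the target word below are illustrative placeholders (not the authors' code), and only single-wordpiece targets are handled:

```python
# Sketch: collect last-layer BERT vectors for each occurrence of a target word.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def embed_occurrences(sentences, target):
    """Return one 768-d last-layer vector per occurrence of `target`."""
    vectors = []
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
        vectors += [hidden[i] for i, tok in enumerate(tokens) if tok == target]
    return torch.stack(vectors)

emb = embed_occurrences(
    ["she sat by the river bank", "the bank raised interest rates"], "bank")
print(emb.shape)  # torch.Size([2, 768])
```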
|
{ |
|
"text": "Before turning to the clustering task, we investigate to what degree semantic classes can be separated by a hyperplane in embedding space, resulting in semantic regions as depicted in Figure 2 . To this end, we train a simple linear classifier on top of the BERT embeddings (without fine-tuning) to predict the semantic class and report accuracy. Crucially, we down-project the 768-dimensional vectors using PCA ensuring that separability is not merely a consequence of high dimensionality. In the highdimensional setting, this allows to assess to what extent semantic regions do form.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 192, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear Separability", |
|
"sec_num": "3.1" |
|
}, |
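A sketch of this separability probe under the stated setup, assuming scikit-learn; `X` and `y` stand in for the sampled embeddings and their SemCor sense labels:

```python
# Sketch: linear-separability probe on PCA-reduced contextual embeddings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def separability(X, y, k=20):
    """Mean 5-fold accuracy of a linear classifier on k principal components."""
    probe = make_pipeline(
        StandardScaler(),                   # normalize the inputs
        PCA(n_components=k),                # rule out trivial high-dim separability
        LogisticRegression(max_iter=1000),  # the separating hyperplane
    )
    return cross_val_score(probe, X, y, cv=5).mean()

# Synthetic stand-in: two shifted Gaussian clouds in 768 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 768)) + np.repeat([[0.05], [-0.05]], 200, axis=0)
y = np.repeat([0, 1], 200)
print(f"accuracy: {separability(X, y):.2f}")
```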
|
{ |
|
"text": "Once the presence of semantic regions has been concluded, we want to investigate the extent to which clusters form and how they are connected. For this, we train clustering models to understand the modality of the data, and to what extent clusters are in isolation from each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Because we are interested in practical gains, we refrain from using purely theoretical tools and clusterability scores (Ackerman and Ben-David, 2009; Mccarthy et al., 2016) . In contrast, we use interpretable clustering models (Frey and Dueck, 2007; Ester et al., 1996; Campello et al., 2013 ; Comani- ciu and Meer, 2002) that can detect the number of clusters in the data, as well as an adapted version of the Chinese Whispers algorithm (Biemann, 2006) that accounts for the hubness property 1 amongst embedding vectors outputted by BERT. The Chinese Whispers algorithm relies on a graph produced by the word embeddings and identifies clusters by passing messages between the nodes of the graph. From the sampled word embeddings we create the graph adjacency matrix M by calculating the pairwise cosine similarity between embeddings, and similar to Ribeiro et al. (2019) , prune any edges which correspond to a cosine similarity lower than", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 149, |
|
"text": "(Ackerman and Ben-David, 2009;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 172, |
|
"text": "Mccarthy et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 249, |
|
"text": "(Frey and Dueck, 2007;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 269, |
|
"text": "Ester et al., 1996;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 291, |
|
"text": "Campello et al., 2013", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 453, |
|
"text": "(Biemann, 2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 850, |
|
"end": 871, |
|
"text": "Ribeiro et al. (2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "w cutoff = \u00b5(M ) + c (M )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where c is a hyperparameter, and \u00b5 and are the mean and standard deviation of all cosine similarities recorded in M . Hubs are defined as the top n embedding vectors with highest cumulative cosine similarities. The development and test sets consist of n 2 words respectively, including their set of sampled embedding vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "3.2" |
|
}, |
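Putting these pieces together, a compact sketch of the pruned-graph construction and the Chinese Whispers label passing might look as follows (numpy only; the values of `c`, the hub count, and the iteration budget are illustrative assumptions, not the paper's settings):

```python
# Sketch: cosine-similarity graph with cutoff mu(M) + c*sigma(M), hub removal,
# and Chinese Whispers-style label propagation over the remaining edges.
import numpy as np

def chinese_whispers_clusters(E, c=1.0, n_hubs=0, iters=20, seed=0):
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    M = E @ E.T                          # pairwise cosine similarities
    np.fill_diagonal(M, 0.0)
    cutoff = M.mean() + c * M.std()      # w_cutoff = mu(M) + c * sigma(M)
    W = np.where(M >= cutoff, M, 0.0)    # prune edges below the cutoff
    if n_hubs:                           # drop hubs: highest cumulative similarity
        hubs = np.argsort(M.sum(axis=1))[-n_hubs:]
        W[hubs, :] = 0.0
        W[:, hubs] = 0.0
    labels = np.arange(len(E))           # each node starts in its own cluster
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        for i in rng.permutation(len(E)):
            nbrs = np.nonzero(W[i])[0]
            if len(nbrs) == 0:
                continue
            votes = {}                   # adopt the strongest neighboring label
            for j in nbrs:
                votes[labels[j]] = votes.get(labels[j], 0.0) + W[i, j]
            labels[i] = max(votes, key=votes.get)
    return labels
```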
|
{ |
|
"text": "To score the overlap between a predicted clustering and the underlying ground-truth labels, we use the Adjusted Random Index (ARI) (Rand, 1971; Hubert and Arabie, 1985) , which returns a similarity measure where a value of 1 implies an identical clustering up to a permutation and a value of 0 implies random predictions. Please note that this also introduces a small penalty when more clusters are introduced than actually present in the dataset according to the cluster-class-labels. However, pre-1 hubs are embeddings close to a majority of other embedding vectors, degrading performance (Conneau et al., 2017) venting this is not in the scope of this work, and as such we do not further investigate this.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 143, |
|
"text": "(Rand, 1971;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 168, |
|
"text": "Hubert and Arabie, 1985)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 613, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "3.2" |
|
}, |
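The ARI computation itself is standard; a sketch using scikit-learn's implementation:

```python
# Sketch: scoring a predicted clustering against ground-truth sense labels.
from sklearn.metrics import adjusted_rand_score

truth = [0, 0, 1, 1, 2, 2]   # ground-truth semantic class labels
perm  = [1, 1, 2, 2, 0, 0]   # same partition, labels permuted
other = [0, 1, 2, 0, 1, 2]   # unrelated assignment

print(adjusted_rand_score(truth, perm))   # 1.0: identical up to permutation
print(adjusted_rand_score(truth, other))  # -0.25: no agreement beyond chance
```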
|
{ |
|
"text": "If no clustering is found, one can say with high confidence that the semantic regions are not occurring in different modes, and rather transition seamlessly into one another. This allows for an assessment in high dimensional space to what extent semantic regions are obvious, apparent by distinct modes. The motivation behind both experiments is visually depicted in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 375, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We proceed with a discussion of the experimental results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "First we aim to develop an understanding of bias in SemCor, as any bias in the data will propagate on to further observations. We conduct a simple experiment where we analyse the distribution of occurrences of semantic class ids. Figure 3 : A cumulative plot over all words with Word-Net senses within SemCor 3.0 and their respective frequencies. The SemCor data is biased. Words with a low WordNet sense index, i.e. close to 0, occur more often than words with a high WordNet sense index, i.e. above 5. There would be no bias if the two distributions would overlap. The skew could be a natural effect of how words with lower WordNet indices are assigned to more frequently used words. Figure 3 depicts that the SemCor corpus is biased towards semantic classes which have a lower Word-Net class ID. This could be due to the nature of WordNet, likely assigning low id indices to frequently used words. This requires us to oversample underrepresented classes for select experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 238, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 694, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Bias in SemCor", |
|
"sec_num": "4.1" |
|
}, |
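The tally underlying Figure 3 can be sketched as follows, assuming NLTK's WordNet interface; the `annotations` list is a hypothetical stand-in for SemCor's (lemma, synset) labels:

```python
# Sketch: count how often each WordNet sense index is the annotated sense.
# Requires nltk and a downloaded WordNet corpus (nltk.download("wordnet")).
from collections import Counter
from nltk.corpus import wordnet as wn

# Hypothetical stand-in for SemCor annotations: (lemma, annotated synset name).
annotations = [("bank", "bank.n.01"), ("bank", "bank.n.01"), ("run", "run.v.02")]

index_counts = Counter()
for lemma, synset_name in annotations:
    names = [s.name() for s in wn.synsets(lemma)]
    if synset_name in names:
        index_counts[names.index(synset_name)] += 1  # 0 = first-listed sense

print(sorted(index_counts.items()))  # in SemCor, low indices dominate
```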
|
{ |
|
"text": "We now turn our attention to what extent closed semantic regions exist in the embedding space. For a fixed word w that frequently occurs in SemCor, we sample up to n = 500 embedding vectors, apply 5-fold cross-validation, and oversample any imbalanced-class datasamples. The input is normalized, and we apply dimensionality reduction using PCA to k components, ensuring that each of the semantic classes contains at least 20 samples in the dataset. We use the SemCor semantic class labels as the response variables for the classification task. We only include semantic classes for the given word, leaving us with few class-labels. Results are shown in Table 1 Accuracy rates of over 75% are achieved with k = 20. The % variance refers to the explainable variance when the largest k eigenvalues are kept, as calculated by P k i i where i is the ith largest eigenvalue, hinting to how much information according to the largest k principal components are kept. Similar results are achieved for 2-class and multi-class classification tasks with other words (see Appendix A.2). We conclude that the individual semantic classes are -to a reasonable extent -linearly separable. As such, contextual word embeddings are not randomly distributed over the embedding space, and closed semantic regions do form.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 652, |
|
"end": 659, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear Separability", |
|
"sec_num": "4.2" |
|
}, |
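The "% variance" column corresponds to the fraction of total variance retained by the top k components; a sketch of the computation (the random matrix merely stands in for sampled embeddings):

```python
# Sketch: explained variance of the top-k principal components,
# sum_{i<=k} lambda_i / sum_i lambda_i, with eigenvalues lambda_i sorted.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 768))         # stand-in for n = 500 embeddings

lam = PCA().fit(X).explained_variance_  # eigenvalues, largest first
for k in (10, 20, 30, 50, 75, 100):
    print(k, round(lam[:k].sum() / lam.sum(), 2))
```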
|
{ |
|
"text": "Before we analyze the structure of individual semantic classes, we want to understand how polysemy relates to the mean standard deviation of all contextual word embeddings X sampled for the word w. This helps us to understand how we need to adapt different clustering models. Figure 4 : For each word w, we sample up to n = 500 contextual word embeddings X. We calculate the mean standard-deviation across embedding-dimensions as", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 284, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Polysemy vs. Variance", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "P n i P d j x i j where x j i 2 R d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polysemy vs. Variance", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "is the jth dimension of the ith sampled embedding vector for word w. Word-Net is used to retrieve the number of semantic classes of w, denoting the amount of polysemy of word w. Figure 4 shows that polysemous words have high variance, an idea initially put forth by Miller and Charles (1991) . As such, vectors of polysemous words seem to be distributed at least as dispersed around the space as non-polysemous words do. Notice that the converse is not true, as there are non-polysemous words that have high variance. Amongst others, these could include stopwords as hinted by Ethayarajh (2019).", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 291, |
|
"text": "Miller and Charles (1991)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 186, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Polysemy vs. Variance", |
|
"sec_num": "4.3" |
|
}, |
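The dispersion measure from Figure 4 reduces to a per-dimension standard deviation averaged over the d = 768 dimensions; a minimal sketch:

```python
# Sketch: mean standard deviation across embedding dimensions for one word.
import numpy as np

def mean_std(X):
    """X: (n, d) sampled embeddings; returns (1/d) * sum_j std_i(x_i^j)."""
    return X.std(axis=0).mean()

rng = np.random.default_rng(0)
tight = rng.normal(scale=0.1, size=(500, 768))   # monosemous-like word
spread = rng.normal(scale=0.5, size=(500, 768))  # polysemous-like word
print(mean_std(tight) < mean_std(spread))        # True: more dispersed
```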
|
{ |
|
"text": "We now want to understand to what extent distinct semantic clusters exist. For a set of words w 1 , . . . , w n , we sample up to n = 500 embedding vectors per word from SemCor and the news.2007.corpus 2 and apply dimensionality reduction using PCA to k dimensions. Due to the limited size of SemCor, we set the words in the development set to was, thought, made, only, central, pizza and the set of words in the test set to run, round, down, bank, key, arms. We include both polysemous words, as well as words which have a single recorded WordNet meaning, such that our experiments do not overfit to polysemous words. With default-package hyperparameters, all clustering algorithms would indicate that no distinct clustering could be found, i.e. the sampled word embeddings would form a continuous density. Because we want to see to what extent BERT conforms to commonly accepted linguistic senses as given by WordNet, we apply the NetworkX Chinese Whispers implementation (Hagberg et al., 2006) on the resulting graph. Hyperparameters and their respective bounds for all clustering models are listed in Appendix A.1. We include the [SEP] tag at the end of the sentence, as this increases performance on all clustering methods. The ARI is exclusively calculated on samples for which we have the ground-truth cluster label and stems from the mean of multiple such word clusterings. We notice that choosing suitable hyperparameters is non-trivial and thus apply automated model-and hyperparameter selection, making use of random search (Bergstra and Bengio, 2012) and bayesian optimization (Wang et al., 2013) The models used and their maximal performance after 300 trials of hyperparameters search are recorded in Table 2 . Our modified Chinese Whispers algorithm is the best-performing clustering model. However, with an ARI score of 0.457, this method is not able to perfectly distinguish between multiple WordNet semantic classes 4 . To understand why this is the case, we proceed with a qualitative evaluation of some resulting clusters. One such clustering is depicted in Table 3 , presenting four partitions for arms 5 . We achieve similar such results for 9 other words but focus on one example for conciseness. Notice that the clusters differ not only in semantics but also in other linguistic phenomena, most notably sentiment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 974, |
|
"end": 996, |
|
"text": "(Hagberg et al., 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1535, |
|
"end": 1562, |
|
"text": "(Bergstra and Bengio, 2012)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1589, |
|
"end": 1608, |
|
"text": "(Wang et al., 2013)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1714, |
|
"end": 1721, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 2077, |
|
"end": 2084, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "4.4" |
|
}, |
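A sketch of the search loop, reusing the `chinese_whispers_clusters` sketch from Section 3.2 and scoring by the mean ARI over the development words; `dev_sets` and the search bounds are illustrative assumptions:

```python
# Sketch: random search over the cutoff scale c, scored by mean ARI.
import numpy as np
from sklearn.metrics import adjusted_rand_score

def random_search(dev_sets, trials=300, seed=0):
    """dev_sets: dict mapping word -> (embeddings, ground-truth labels)."""
    rng = np.random.default_rng(seed)
    best_c, best_ari = None, -1.0
    for _ in range(trials):
        c = rng.uniform(0.0, 3.0)      # illustrative search bounds
        scores = [adjusted_rand_score(y, chinese_whispers_clusters(X, c=c))
                  for X, y in dev_sets.values()]
        if np.mean(scores) > best_ari:
            best_c, best_ari = c, float(np.mean(scores))
    return best_c, best_ari
```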
|
{ |
|
"text": "Given the quantitative and qualitative evaluation, we conclude that one cannot generalize that a clear ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clusterability", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The classic years of the arms race, the 1950s and '60s before . . . distinction between semantic concepts in contextual word embeddings produced by BERT exists. One of numerous counterexamples is underlined in the left visualization of Figure 1 . Certain combinations of semantics, syntax, and sentiment are more frequent than others (Hagoort, 2003; May et al., 2019) , likely affecting the subspace structure and sometimes resulting in clusters that are distinct due to their simultaneous difference in both semantic and syntactic features (see Appendix A.3). However, this work also poses the question to what extent rule-based and handcrafted notions of semantics, such as the ones given by WordNet, are appropriate, opening the question to what extent BERT actually encodes a more flexible notion of semantics that is not rooted in hard distinctions between senses. We leave analysis in this direction to future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 349, |
|
"text": "(Hagoort, 2003;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 367, |
|
"text": "May et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 244, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we investigated how contextual word embeddings produced by BERT capture semantic concepts with a strong focus on polysemy. Our findings show that BERT creates closed semantic regions that are not clearly distinguishable from each other, seamlessly transitioning from one into another. We have shown that subspace organization is not purely determined by semantics. Instead, it is also intertwined with concepts such as syntax and sentiment. Finally, the repeated limitations of hard distinctions between senses as given via WordNet also open up the question to what extent BERT adds a more flexible notion of semantics, compared to the hard-coded examples formed by linguists. A better understanding of these relations will be key to developing more interpretable and expressive word embeddings, as well as linguistic knowledge representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/wmt14/ training-monolingual-news-crawl/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the implementation by https://github. com/facebook/Ax 4 An ARI score of at least 0.7 is desirable to conclude a significant overlap between two clusters 5 SeeTable 8 for a complete example clustering", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "David Yenicelik would like to thank Jason Lee (NYU) for the support setting an initial research question investigating on the structure of contextual word embeddings used in translation tasks, Jeremy Scheurer and Gertrude Yenicelik for discussions and comments, as well as Prof. Thomas Hofmann (ETH Z\u00fcrich) for valuable discussions, guidance and enabling to conduct this project as part of the Master's thesis. Finally, the authors thank the anonymous reviewers for their constructive feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Clusterability: A theoretical study", |
|
"authors": [ |
|
{ |
|
"first": "Margareta", |
|
"middle": [], |
|
"last": "Ackerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Ben-David", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margareta Ackerman and Shai Ben-David. 2009. Clus- terability: A theoretical study. Proceedings of the Twelth International Conference on Artificial Intelli- gence and Statistics, PMLR 5:1-8.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Random search for hyper-parameter optimization", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bergstra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "281--305", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13, pages 281-305.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Chinese whispers: An efficient graph clustering algorithm and its application to natural language processing problems", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Workshop on Graph-based Methods for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Biemann. 2006. Chinese whispers: An effi- cient graph clustering algorithm and its application to natural language processing problems. Workshop on Graph-based Methods for Natural Language Process- ing, pages 73-80.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Multimodal distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Bruni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nam", |
|
"middle": [ |
|
"Khanh" |
|
], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "49", |
|
"issue": "", |
|
"pages": "1--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2013. Multimodal distributional semantics. Journal of Artifi- cial Intelligence Research 49, pages 1-47.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Density-based clusteering based on hierarchical density estimates", |
|
"authors": [ |
|
{
"first": "Ricardo",
"middle": [
"J",
"G",
"B"
],
"last": "Campello",
"suffix": ""
},
{
"first": "Davoud",
"middle": [],
"last": "Moulavi",
"suffix": ""
},
{
"first": "Joerg",
"middle": [],
"last": "Sander",
"suffix": ""
}
|
], |
|
"year": 2013, |
|
"venue": "PAKDD 2013: Advances in Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--172", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ricardo J. G. B. Campello, Davoud Moulavi, and Joerg Sander. 2013. Density-based clusteering based on hier- archical density estimates. PAKDD 2013: Advances in Knowledge Discovery and Data Mining, pages 160- 172.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Visualizing and measuring the geometry of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Coenen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Reif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Been", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Pearce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Vi\u00e9gas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Wattenberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Vi\u00e9gas, and Martin Watten- berg. 2019. Visualizing and measuring the geometry of BERT. Advances in Neural Information Processing Systems 32 (NeurIPS 2019).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e9on", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. CoRR, abs/1103.0398.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Mean shift: A robust approach toward feature space analysis", |
|
"authors": [ |
|
{ |
|
"first": "Dorin", |
|
"middle": [], |
|
"last": "Comaniciu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Meer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "603--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dorin Comaniciu and Peter Meer. 2002. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, pages 603-619.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word trans- lation without parallel data. ICLR 2018.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A density-based algorithm for discovering clusters", |
|
"authors": [ |
|
{ |
|
"first": "Matrin", |
|
"middle": [], |
|
"last": "Ester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hans-Peter", |
|
"middle": [], |
|
"last": "Kriegel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joerg", |
|
"middle": [], |
|
"last": "Sander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaowei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "KDD-96 Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matrin Ester, Hans-Peter Kriegel, Joerg Sander, and Xi- aowei Xu. 1996. A density-based algorithm for discov- ering clusters. KDD-96 Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Kawin", |
|
"middle": [], |
|
"last": "Ethayarajh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contex- tualized word representations? comparing the geome- try of BERT, ELMo, and GPT-2 embeddings. EMNLP 2019.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Clustering by passing messages between data points", |
|
"authors": [ |
|
{
"first": "Brendan",
"middle": [
"J"
],
"last": "Frey",
"suffix": ""
},
{
"first": "Delbert",
"middle": [],
"last": "Dueck",
"suffix": ""
}
|
], |
|
"year": 2007, |
|
"venue": "Science", |
|
"volume": "315", |
|
"issue": "5814", |
|
"pages": "972--976", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brendan J. Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science 315 (5814), pages 972-976.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Interplay between syntax and semantics during sentence comprehension: Erp effects of combining syntactic and semantic violations", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Hagoort", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Cognitive Neuroscience", |
|
"volume": "15", |
|
"issue": "6", |
|
"pages": "883--899", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Hagoort. 2003. Interplay between syntax and se- mantics during sentence comprehension: Erp effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience 15:6, pages 883-899.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Comparing partitions", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Hubert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phipps", |
|
"middle": [], |
|
"last": "Arabie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Journal of Classification", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence Hubert and Phipps Arabie. 1985. Compar- ing partitions. Journal of Classification, pages 193- 218.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "What does BERT learn about the structure of language", |
|
"authors": [ |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Jawahar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benoit", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djame", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings ofthe 57th Annual Meeting ofthe Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3651--3657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganesh Jawahar, Benoit Sagot, and Djame Seddah. 2019. What does BERT learn about the structure of language? Proceedings ofthe 57th Annual Meeting ofthe Association for Computational Linguistics, pages 3651-3657.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Word sense disambiguation using a bidirectional lstm", |
|
"authors": [ |
|
{ |
|
"first": "Mikael", |
|
"middle": [], |
|
"last": "Kageback", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Salomonsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikael Kageback and Hans Salomonsson. 2016. Word sense disambiguation using a bidirectional lstm. Pro- ceedings of the 5th Workshop on Cognitive Aspects of the Lexicon, pages 51-56.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "SenseBERT: Driving some sense into BERT", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Levine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barak", |
|
"middle": [], |
|
"last": "Lenz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Or", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Padnos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Or", |
|
"middle": [], |
|
"last": "Sharir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Shalev-Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amnon", |
|
"middle": [], |
|
"last": "Shashua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Shoham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Levine, Barak Lenz, Or Dagan, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2019. SenseBERT: Driving some sense into BERT. ACL 2020.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "On measuring social biases in sentence encoders", |
|
"authors": [ |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1903.10561" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv:1903.10561.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Word sense clustering and clusterability", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Apidianaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Computational Linguistics", |
|
"volume": "42", |
|
"issue": "2", |
|
"pages": "245--275", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana Mccarthy, Marianna Apidianaki, and Katrin Erk. 2016. Word sense clustering and clusterability. Com- putational Linguistics, Volume 42, Issue 2 -June 2016, pages 245-275.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed representa- tions of words and phrases and their compositionality. CoRR, abs/1310.4546.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Introduction to wordnet: An on-line lexical database", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Beckwith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "International Journal of Lexicography", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "235--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine J. Miller. 1990. In- troduction to wordnet: An on-line lexical database. In- ternational Journal of Lexicography, pages 235-244.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Contextual correlates of semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Charles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Language and cognitive processes", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller and Walter G. Charles. 1991. Con- textual correlates of semantic similarity. Language and cognitive processes 6, pages 1-28.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Making sense of word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pelevina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolay", |
|
"middle": [], |
|
"last": "Arefyev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings ofthe 1st Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "174--183", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pelevina, Nikolay Arefyev, Chris Biemann, and Alexander Panchenko. 2016. Making sense of word embeddings. Proceedings ofthe 1st Workshop on Rep- resentation Learning for NLP, pages 174-183.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word repre- sentation. EMNLP, page 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Wic: the word-in-context dataset for evaluating context-sensitive meaning representations", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Taher Pilehvar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Taher Pilehvar and Jose Camacho- Collados. 2019. Wic: the word-in-context dataset for evaluating context-sensitive meaning representations. NAACL 2019.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Objective criteria for the evaluation of clustering methods", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rand", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "846--850", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William M. Rand. 1971. Objective criteria for the eval- uation of clustering methods. Journal of the American Statistical Association, pages 846-850.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Retrofitting word representations for unsupervised sense aware word similarities", |
|
"authors": [ |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Remus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steffen Remus and Chris Biemann. 2018. Retrofitting word representations for unsupervised sense aware word similarities. Proceedings of the Seventeenth Con- ference on Computational Natural Language Learning, pages 143-152.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "L 2 f/inesc-id at semeval-2019 task 2: Unsupervised lexical semantic frame induction using contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Eug\u00e9nio", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e2nia", |
|
"middle": [], |
|
"last": "Mendon\u00e7a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Martins E Matos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Sardinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [ |
|
"Lucia" |
|
], |
|
"last": "Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu\u00edsa", |
|
"middle": [], |
|
"last": "Coheur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings ofthe 13th International Workshop on Semantic Evaluation (SemEval-2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "130--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eug\u00e9nio Ribeiro, V\u00e2nia Mendon\u00e7a, Ricardo Ribeiro, David Martins e Matos, Alberto Sardinha, Ana Lucia Santos, and Lu\u00edsa Coheur. 2019. L 2 f/inesc-id at semeval-2019 task 2: Unsupervised lexical semantic frame induction using contextualized word representa- tions. Proceedings ofthe 13th International Workshop on Semantic Evaluation (SemEval-2019), pages 130- 136.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis", |
|
"authors": [ |
|
{
"first": "Peter",
"middle": [
"J"
],
"last": "Rousseeuw",
"suffix": ""
}
|
], |
|
"year": 1987, |
|
"venue": "Computational and Applied Mathematics", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "53--65", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter J. Rousseeuw. 1987. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Computational and Applied Mathematics 20, pages 53- 65.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Bert as a teacher: Contextual embeddings for sequence-level reward", |
|
"authors": [ |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Florian Schmidt and Thomas Hofmann. 2020. Bert as a teacher: Contextual embeddings for sequence-level reward. ArXiv, abs/2003.02738.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Simple BERT models for relation extraction and semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.05255" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Shi, Jimmy Lin, and David R Cheriton. 2019. Simple BERT models for relation extraction and se- mantic role labeling. arXiv:1904.05255.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. CoRR, abs/1804.07461.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Evaluating word embedding models: Methods and experimental results", |
|
"authors": [ |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fenxiao", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuncheng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-C. Jay", |
|
"middle": [], |
|
"last": "Kuo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bin Wang, Angela Wang, Fenxiao Chen, Yuncheng Wang, and C.-C. Jay Kuo. 2019. Evaluating word embedding models: Methods and experimental results. CoRR, abs/1901.09785.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Bayesian optimization in high dimensions via random embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Ziyu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masrour", |
|
"middle": [], |
|
"last": "Zoghi", |
|
"suffix": "" |
|
}, |
|
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Matheson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nando", |
|
"middle": [], |
|
"last": "De Freitas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Twenty-Third International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziyu Wang, Masrour Zoghi \u2020, Frank Hutter, David Matheson, and Nando de Freitas. 2013. Bayesian op- timization in high dimensions via random embeddings. AAAI Publications, Twenty-Third International Joint Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Does BERT make any sense? interpretable word sense disambiguation with contextualized embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Gregor", |
|
"middle": [], |
|
"last": "Wiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Remus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avi", |
|
"middle": [], |
|
"last": "Chawla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Conference on Natural Language Processing (KONVENS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT make any sense? interpretable word sense disambiguation with contextu- alized embeddings. Conference on Natural Language Processing (KONVENS) 2019.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "PCA (left) and UMAP (right) visualizations for contextual word embeddings sampled for the word run. Red points denote nouns, blue points denote verbs", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Datapoints generated by sampling from a normal distribution. (left) includes class information denoted by the green, blue and red colors, forming semantic regions. This data is linearly separable, as there is always a hyperplane separating any two classes. (right) The class label information is not available. It is unclear how a clustering would look like.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Average mean and standard deviation of the accuracy of a linear classifier trained on the 2 most common semantic classes for the words was, one, is. The choice of words is limited to the datasize of SemCor to allow for a significant size of datasamples.", |
|
"content": "<table><tr><td/><td>.</td><td/></tr><tr><td>k</td><td colspan=\"2\">% variance accuracy (mean / std)</td></tr><tr><td>10</td><td>0.30</td><td>0.74 / 0.05</td></tr><tr><td>20</td><td>0.44</td><td>0.80 / 0.04</td></tr><tr><td>30</td><td>0.54</td><td>0.82 / 0.03</td></tr><tr><td>50</td><td>0.70</td><td>0.87 / 0.04</td></tr><tr><td>75</td><td>0.79</td><td>0.83 / 0.04</td></tr><tr><td>100</td><td>0.85</td><td>0.89 / 0.03</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "The maximum ARI scores achieved during hyperparameter optimization on different models for k = 20 and n = 1000.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Gotbaum tried to slide her handcuffed arms from her back to her front . . . 2 She swooped him up into her arms and kissed him madly . . . 3 . . . and shuttle robotic arms of a solar array and truss . . .", |
|
"content": "<table><tr><td>Partition</td><td>Representative Sample</td></tr><tr><td>1</td><td>Ms.</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Representative samples for the clusters</td></tr><tr><td>found by the best performing clustering model for the</td></tr><tr><td>word arms. Partitions 1-3 consider a person's arms,</td></tr><tr><td>whereas partition 4 considers arms as a synonym to</td></tr><tr><td>weaponry. Partitions 1, 2 and 3 strongly contrast in</td></tr><tr><td>sentiment (scared, loving, and confident respectively).</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |