{
"paper_id": "W12-0203",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:10:14.007428Z"
},
"title": "Looking at word meaning. An interactive visualization of Semantic Vector Spaces for Dutch synsets",
"authors": [
{
"first": "Kris",
"middle": [],
"last": "Heylen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leuven",
"location": {
"addrLine": "Blijde-Inkomsstraat 21/3308",
"postCode": "3000",
"settlement": "Leuven",
"country": "Belgium"
}
},
"email": "kris.heylen@arts.kuleuven.be"
},
{
"first": "Dirk",
"middle": [],
"last": "Speelman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leuven",
"location": {
"addrLine": "Blijde-Inkomsstraat 21/3308",
"postCode": "3000",
"settlement": "Leuven",
"country": "Belgium"
}
},
"email": "dirk.speelman@arts.kuleuven.be"
},
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leuven",
"location": {
"addrLine": "Blijde-Inkomsstraat 21/3308",
"postCode": "3000",
"settlement": "Leuven",
"country": "Belgium"
}
},
"email": "dirk.geeraerts@arts.kuleuven.be"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In statistical NLP, Semantic Vector Spaces (SVS) are the standard technique for the automatic modeling of lexical semantics. However, it is largely unclear how these black-box techniques exactly capture word meaning. To explore the way an SVS structures the individual occurrences of words, we use a non-parametric MDS solution of a token-by-token similarity matrix. The MDS solution is visualized in an interactive plot with the Google Chart Tools. As a case study, we look at the occurrences of 476 Dutch nouns grouped in 214 synsets.",
"pdf_parse": {
"paper_id": "W12-0203",
"_pdf_hash": "",
"abstract": [
{
"text": "In statistical NLP, Semantic Vector Spaces (SVS) are the standard technique for the automatic modeling of lexical semantics. However, it is largely unclear how these black-box techniques exactly capture word meaning. To explore the way an SVS structures the individual occurrences of words, we use a non-parametric MDS solution of a token-by-token similarity matrix. The MDS solution is visualized in an interactive plot with the Google Chart Tools. As a case study, we look at the occurrences of 476 Dutch nouns grouped in 214 synsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last twenty years, distributional models of semantics have become the standard way of modeling lexical semantics in statistical NLP. These models, aka Semantic Vector Spaces (SVSs) or Word Spaces, capture word meaning in terms of frequency distributions of words over cooccurring context words in a large corpus. The basic assumption of the approach is that words occurring in similar contexts will have a similar meaning. Speficic implementations of this general idea have been developed for a wide variety of computational linguistic tasks, including Thesaurus extraction and Word Sense Disambiguation, Question answering and the modeling of human behavior in psycholinguistic experiments (see Turney and Pantel (2010) for a general overview of applications and speficic models). In recent years, Semantic Vector Spaces have also seen applications in more traditional domains of linguistics, like diachronic lexical studies (Sagi et al., 2009; Cook and Stevenson, 2010; Rohrdantz et al., 2011) , or the study of lexical variation (Peirsman et al., 2010) . In this paper, we want to show how Semantic Vector Spaces can further aid the linguistic analysis of lexical semantics, provided that they are made accessible to lexicologists and lexicographers through a visualization of their output.",
"cite_spans": [
{
"start": 714,
"end": 727,
"text": "Pantel (2010)",
"ref_id": "BIBREF20"
},
{
"start": 933,
"end": 952,
"text": "(Sagi et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 953,
"end": 978,
"text": "Cook and Stevenson, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 979,
"end": 1002,
"text": "Rohrdantz et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 1039,
"end": 1062,
"text": "(Peirsman et al., 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
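{
"text": "To make this basic assumption concrete, consider a toy sketch in Python (hypothetical counts, not data from this study): two nouns are represented by their co-occurrence counts over a handful of context features and compared with the cosine measure, the similarity metric we also use for our own models below.\nimport numpy as np\n# toy co-occurrence counts over the context features [bark, chase, pet, keyboard]\ndog = np.array([10.0, 7.0, 9.0, 0.0])\ncat = np.array([2.0, 8.0, 9.0, 0.0])\nscreen = np.array([0.0, 0.0, 1.0, 8.0])\ndef cosine(a, b):\n    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))\nprint(cosine(dog, cat))     # high cosine: similar contexts, similar meaning\nprint(cosine(dog, screen))  # low cosine: dissimilar contexts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},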
{
"text": "Although all applications mentioned above assume that distributional models can capture word meaning to some extent, most of them use SVSs only in an indirect, black-box way, without analyzing which semantic properties and relations actually manifest themselves in the models. This is mainly a consequence of the task-based evaluation paradigm prevalent in Computational Linguistics: the researchers address a specific task for which there is a pre-defined gold standard; they implement a model with some new features, that usually stem from a fairly intuitive, commonsense reasoning of why some feature might benefit the task at hand; the new model is then tested against the gold standard data and there is an evaluation in terms of precision, recall and F-score. In rare cases, there is also an error analysis that leads to hypotheses about semantic characteristics that are not yet properly modeled. Yet hardly ever, there is in-depth analysis of which semantics the tested model actually captures. Even though taskbased evaluation and shared test data sets are vital to the objective comparison of computational approaches, they are, in our opinion, not sufficient to assess whether the phenomenon of lexical semantics is modeled adequately from a linguistic perspective. This lack of linguistic insight into the functioning of SVSs is also bemoaned in the community itself. For example, Baroni and Lenci (2011) say that \"To gain a real insight into the abilities of DSMs (Distributional Semantic Models, A/N) to address lexical semantics, existing benchmarks must be complemented with a more intrinsically oriented approach, to perform direct tests on the specific aspects of lexical knowledge captured by the models\". They go on to present their own lexical database that is similar to Word-Net, but includes some additional semantic relations. They propose researchers test their model against the database to find out which of the encoded relations it can detect. However, such an analysis still boils down to checking whether a model can replicate pre-defined structuralist semantic relations, which themselves represent a quite impoverished take on lexical semantics, at least from a linguistic perspective. In this paper, we want to argue that a more linguistically adequate investigation of how SVSs capture lexical semantics, should take a step back from the evalution-against-gold-standard paradigm and do a direct and unbiased analysis of the output of SVS models. Such an analysis should compare the SVS way of structuring semantics to the rich descriptive and theoretic models of lexical semantics that have been developed in Linguistics proper (see Geeraerts (2010b) for an overview of different research traditions). Such an in-depth, manual analyis has to be done by skilled lexicologists and lexicographers. But would linguists, that are traditionally seen as not very computationally oriented, be interested in doing what many Computational Linguists consider to be tedious manual analysis? The answer, we think, is yes. The last decade has seen a clear empirical turn in Linguistics that has led linguists to embrace advanced statistical analyses of large amounts of corpus data to substantiate their theoretical hypotheses (see e.g. Geeraerts (2010a) and other contributions in Glynn and Fischer (2010) on research in semantics). SVSs would be an ideal addition to those linguists' methodological repertoire. 
This creates the potential for a win-win situation: Computational linguists get an in-depth evaluation of their models, while theoretical linguists get a new tool for doing large scale empirical analyses of word meaning. Of course, one cannot just hand over a large matrix of word similaties (the raw output of an SVS) and ask a lexicologist what kind of semantics is \"in there\". Instead, a linguist needs an intuitive interface to explore the semantic structure captured by an SVS.",
"cite_spans": [
{
"start": 1393,
"end": 1416,
"text": "Baroni and Lenci (2011)",
"ref_id": "BIBREF0"
},
{
"start": 2668,
"end": 2685,
"text": "Geeraerts (2010b)",
"ref_id": "BIBREF5"
},
{
"start": 3258,
"end": 3275,
"text": "Geeraerts (2010a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we aim to present exactly that: an interactive visualization of a Semantic Vector Space Model that allows a lexicologist or lexicographer to inspect how the model structures the uses of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "SVSs can model lexical semantics on two levels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token versus Type level",
"sec_num": "2"
},
{
"text": "1. the type level: aggregating over all occurrences of a word, giving a representation of a word's general semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token versus Type level",
"sec_num": "2"
},
{
"text": "2. the token level: representing the semantics of each individual occurrence of a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token versus Type level",
"sec_num": "2"
},
{
"text": "The type-level models are mostly used to retrieve semantic relations between words, e.g. synonyms in the task of thesaurus extraction. Token-level models are typically used to distinguish between the different meanings within the uses of one word, notably in the task of Word Sense Disambiguation or Word Sense Induction. Lexicological studies on the other hand, typically combine both perspectives: their scope is often defined on the type level as the different words of a lexical field or the set of near-synonyms referring to the same concept, but they then go on to do a fine-grained analysis on the token level of the uses of these words to find out how the semantic space is precisely structured. In our study, we will also take a concept-centered perspective and use as a starting point the 218 sets of Dutch near-synonymous nouns that Ruette et al. (2012) generated with their type-level SVS. For each synset, we then implement our own token-level SVS to model the individual occurrences of the nouns. The resulting token-by-token similarity matrix is then visualized to show how the occurrences of the different nouns are distributed over the semantic space that is defined by the synset's concept. Because Dutch has two national varieties (Belgium and the Netherlands) that show considerable lexical variation, and because this is typically of interest to lexicologists, we will also differentiate the Netherlandic and Belgian tokens in our SVS models and their visualization. The rest of this paper is structured as follows. In the next section we present the corpus and the near-synonym sets we used for our study. Section 4 presents the token-level SVS implemented for modeling the occurrences of the nouns in the synsets. In section 5 we discuss the visualization of the SVS's token-by-token similarity matrices with Multi Dimensional Scaling and the Google Visualization API. Finally, section 6 wraps up with conclusions and prospects for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token versus Type level",
"sec_num": "2"
},
{
"text": "The corpus for our study consists of Dutch newspaper materials from 1999 to 2005. For Netherlandic Dutch, we used the 500M words Twente Nieuws Corpus (Ordelman, 2002) 1 , and for Belgian Dutch, the Leuven Nieuws Corpus (aka Mediargus corpus, 1.3 million words 2 ). The corpora were automatically lemmatized, part-of-speech tagged and syntactically parsed with the Alpino parser (van Noord, 2006) . Ruette et al. (2012) used the same corpora for their semi-automatic generation of sets of Dutch near-synonymous nouns. They used a socalled dependency-based model (Pad\u00f3 and Lapata, 2007) , which is a type-level SVS that models the semantics of a target word as the weighted cooccurrence frequencies with context words that apear in a set of pre-defined dependency relations with the target (a.o. adjectives that modify the target noun, and verbs that have the target noun as their subject). Ruette et al. (2012) submitted the output of their SVS to a clustering algorithm known as Clustering by Committee (Pantel and Lin, 2002) . After some further manual cleaning, this resulted in 218 synsets containing 476 nouns in total. Next, we wanted the model the individual occurrences of the nouns. The token-level SVS we used is an adaptation the approach proposed by Sch\u00fctze (1998) . He models the semantics of a token as the frequency distribution over its so-called second order co-occurrences. These second-order co-occurrences are the type-level context features of the (first-order) context words co-occuring with the token. This way, a token's meaning is still modeled by the \"context\" it occurs in, but this context is now modeled itself by combining the type vectors of the words in the context. This higher order modeling is necessary to avoid data-sparseness: any token only occurs with a handful of other words and a first-order cooccurrence vector would thus be too sparse to do any meaningful vector comparison. Note that this approach first needs to construct a type-level SVS for the first-order context words that can then be used to create a second-order token-vector.",
"cite_spans": [
{
"start": 378,
"end": 395,
"text": "(van Noord, 2006)",
"ref_id": "BIBREF21"
},
{
"start": 398,
"end": 418,
"text": "Ruette et al. (2012)",
"ref_id": "BIBREF17"
},
{
"start": 561,
"end": 584,
"text": "(Pad\u00f3 and Lapata, 2007)",
"ref_id": "BIBREF12"
},
{
"start": 889,
"end": 909,
"text": "Ruette et al. (2012)",
"ref_id": "BIBREF17"
},
{
"start": 1003,
"end": 1025,
"text": "(Pantel and Lin, 2002)",
"ref_id": "BIBREF13"
},
{
"start": 1261,
"end": 1275,
"text": "Sch\u00fctze (1998)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
{
"text": "In our study, we therefore first constructed a type-level SVS for the 573,127 words in our corpus with a frequency higher than 2. Since the focus of this study is visualization rather than finding optimal SVS parameter settings, we chose settings that proved optimal in our previous studies Peirsman et al., 2010) . For the context features of this SVS, we used a bag-of-words approach with a window of 4 to the left and right around the targets. The context feature set was restricted to the 5430 words, that were the among the 7000 most frequent words in the corpus, (minus a stoplist of 34 high-frequent function words) AND that occurred at least 50 times in both the Netherlandic and Belgian part of the corpus. The latter was done to make sure that Netherlandic and Belgian type vectors were not dissimilar just because of topical bias from proper names, place names or words relating to local events. Raw co-occurrence frequencies were weighted with Pointwise Mutual Information and negative PMI's were set to zero.",
"cite_spans": [
{
"start": 291,
"end": 313,
"text": "Peirsman et al., 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
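{
"text": "As a minimal sketch of this type-level step (a toy corpus and window instead of the corpus and settings above; all names are our own), the following Python code builds a bag-of-words co-occurrence matrix, weights it with Pointwise Mutual Information and sets negative PMIs to zero:\nimport numpy as np\ncorpus = [['the', 'dog', 'barks', 'at', 'the', 'cat'],\n          ['the', 'cat', 'sleeps', 'near', 'the', 'screen']]\nwindow = 4  # symmetric bag-of-words window, as in the type-level SVS above\nvocab = sorted({w for sent in corpus for w in sent})\nidx = {w: i for i, w in enumerate(vocab)}\ncounts = np.zeros((len(vocab), len(vocab)))\nfor sent in corpus:\n    for i, target in enumerate(sent):\n        for j in range(max(0, i - window), min(len(sent), i + window + 1)):\n            if j != i:\n                counts[idx[target], idx[sent[j]]] += 1.0\n# PMI weighting; negative PMIs are set to zero\ntotal = counts.sum()\np_target = counts.sum(axis=1, keepdims=True) / total\np_context = counts.sum(axis=0, keepdims=True) / total\nwith np.errstate(divide='ignore', invalid='ignore'):\n    pmi = np.log((counts / total) / (p_target * p_context))\npmi[~np.isfinite(pmi)] = 0.0\npmi[pmi < 0.0] = 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},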
{
"text": "In a second step, we took a random sample of 100 Netherlandic and a 100 Belgian newspaper issues from the corpus and extracted all occurrences of each of the 476 nouns in the synsets described above. For each occurrence, we built a token-vector by averaging over the type-vectors of the words in a window of 5 words to the left and right of the token. We experimented with two averaging functions. In a first version, we followed Sch\u00fctze (1998) and just summed the type vectors of a token's context words, normalizing by the number of context words for that token:",
"cite_spans": [
{
"start": 430,
"end": 444,
"text": "Sch\u00fctze (1998)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
{
"text": "o w i = n j\u2208C w i c j n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
{
"text": "where o w i is the token vector for the i th occurrence of noun w and C w i is the set of n type vectors c j for the context words in the window around that i th occurrence of noun w. However, this summation means that each first order context word has an equal weight in determining the token vector. Yet, not all first-order context words are equally informative for the meaning of a token. In a sentence like \"While walking to work, the teacher saw a dog barking and chasing a cat\", bark and cat are much more indicative of the meaning of dog than say teacher or work. In a second, weighted version, we therefore increased the contribution of these informative context words by using the first-order context words' PMI values with the noun in the synset. PMI can be regarded as a measure for informativeness and target-noun/context-word PMI-values were available anyway from our large type-level SVS. The PMI of a noun w and a context word c j can now be seen as a weight pmi w c j . In constructing the token vector o w i for the ith occurrence of noun w , we now multiply the type vector c j of each context word with the PMI weight pmi w c j , and then normalize by the sum of the pmi-weights:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
{
"text": "o w i = n j\u2208C w i pmi w c j * c j n j pmi w c j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
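{
"text": "In code, the two averaging functions differ only in the weight each context word's type vector receives. A minimal sketch (type_vec and pmi_w are hypothetical lookup tables standing in for the type-level SVS and the PMI values described above):\nimport numpy as np\ndef token_vector(context_words, type_vec, pmi_w=None):\n    # type_vec: dict mapping a context word to its type-level vector\n    # pmi_w: dict mapping a context word to its PMI with the target noun w\n    vecs = [type_vec[c] for c in context_words if c in type_vec]\n    if pmi_w is None:\n        return sum(vecs) / len(vecs)  # unweighted average (Schutze 1998)\n    weights = [pmi_w.get(c, 0.0) for c in context_words if c in type_vec]\n    total = sum(weights)\n    if total == 0.0:\n        return sum(vecs) / len(vecs)  # fall back if no context word is informative\n    return sum(w * v for w, v in zip(weights, vecs)) / total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},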
{
"text": "The token vectors of all nouns from the same synset were then combined in a token by secondorder-context-feature matrix. Note that this matrix has the same dimensionality as the underlying type-level SVS (5430). By calculating the cosine between all pairs of token-vectors in the matrix, we get the final token-by-token similarity matrix for each of the 218 synsets 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
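{
"text": "The pairwise cosines can be computed in one step by row-normalizing the token-by-feature matrix; a short sketch:\nimport numpy as np\ndef cosine_matrix(T):\n    # T: tokens x second-order-features matrix; returns tokens x tokens cosines\n    norms = np.linalg.norm(T, axis=1, keepdims=True)\n    norms[norms == 0.0] = 1.0  # guard against all-zero token vectors\n    Tn = T / norms\n    return Tn @ Tn.T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},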
{
"text": "3 string operations on corpus text files were done with Python 2.7. All matrix calculations were done in Matlab R2009a for Linux",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dutch corpus and synsets",
"sec_num": "3"
},
{
"text": "The token-by-token similarity matrices reflect how the different synonyms carve up the \"semantic space\" of the synset's concept among themselves. However, this information is hard to grasp from a large matrix of decimal figures. One popular way of visualizing a similarity matrix for interpretative purposes is Multidimensional Scaling (Cox and Cox, 2001) . MDS tries to give an optimal 2 or 3 dimensional representation of the similarities (or distances) between objects in the matrix. We applied Kruskal's non-metric Multidimensional Scaling to the all the token-by-token similarity matrices using the isoMDS function in the MASS package of R. Our visualisation software package (see below) forced us to restrict ourselves to a 2 dimensional MDS solution for now, even tough stress levels were generally quite high (0.25 to 0.45). Future implementation may use 3D MDS solutions. Of course, other dimension reduction techniques than MDS exist: PCA is used in Latent Semantic Analysis (Landauer and Dumais, 1997) and has been applied by Sagi et al. (2009) for modeling token semantics. Alternatively, Latent Dirichlect Allocation (LDA) is at the heart of Topic Models (Griffiths et al., 2007) and was adapted by Brody and Lapata (2009) for modeling token semantics. However, these techniques all aim at bringing out a latent structure that abstracts away from the \"raw\" underlying SVS similarities. Our aim, on the other hand, is precisely to investigate how SVSs structure semantics based on contextual distribution properties BEFORE additional latent structuring is applied. We therefore want a 2D representation of the token similarity matrix that is as faithful as possible and that is what MDS delivers 4 .",
"cite_spans": [
{
"start": 336,
"end": 355,
"text": "(Cox and Cox, 2001)",
"ref_id": "BIBREF3"
},
{
"start": 1037,
"end": 1055,
"text": "Sagi et al. (2009)",
"ref_id": "BIBREF18"
},
{
"start": 1168,
"end": 1192,
"text": "(Griffiths et al., 2007)",
"ref_id": "BIBREF8"
},
{
"start": 1212,
"end": 1235,
"text": "Brody and Lapata (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},
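{
"text": "We ran the MDS step with isoMDS in R; an equivalent sketch in Python with scikit-learn (our substitution for illustration, not the original pipeline) converts the cosine similarities to dissimilarities and fits a non-metric 2-dimensional solution:\nimport numpy as np\nfrom sklearn.manifold import MDS\ndef mds_2d(sim):\n    # sim: token-by-token cosine similarity matrix\n    dissim = 1.0 - sim  # turn similarities into dissimilarities\n    np.fill_diagonal(dissim, 0.0)\n    mds = MDS(n_components=2, metric=False, dissimilarity='precomputed', random_state=0)\n    coords = mds.fit_transform(dissim)\n    print('stress:', mds.stress_)  # indicates how faithful the 2D solution is\n    return coords",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},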
{
"text": "In a next step we wanted to intergrate the 2 dimensional MDS plots with different types of meta-data that might be of interest to the lexicologist. Furthermore, we wanted the plots to be interactive, so that a lexicologist can choose which information to visualize in the plot. We opted for the Motion Charts 5 provided by Google Chart Tools 6 , which allows to plot objects with 2D co-ordinates as color-codable and re-sizeable bubbles in an interactive chart. If a timevariable is present, the charts can be made dynamic to show the changing position of the objects in the plot over time 7 . We used the Rpackage googleVis (Gesmann and Castillo, 2011) , an interface between R and the Google Visualisation API, to convert our R datamatrices into Google Motion Charts. The interactive charts, both those based on the weighted and unweighted token-level SVSs, can be explored on our website ( https://perswww. kuleuven.be/\u02dcu0038536/googleVis).",
"cite_spans": [
{
"start": 625,
"end": 653,
"text": "(Gesmann and Castillo, 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},
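{
"text": "The chart consumes one row per bubble: the 2D MDS coordinates of a token plus its metadata. As an illustration of the kind of table we pass to googleVis (the column names are our own, not part of the googleVis API), a sketch that writes such a table to CSV:\nimport csv\ndef write_chart_table(path, tokens, coords):\n    # tokens: list of dicts with word/country/newspaper/context metadata (hypothetical format)\n    # coords: 2D MDS coordinates, aligned row by row with tokens\n    with open(path, 'w', newline='', encoding='utf-8') as f:\n        writer = csv.writer(f)\n        writer.writerow(['id', 'word', 'country', 'newspaper', 'x', 'y', 'context'])\n        for t, (x, y) in zip(tokens, coords):\n            writer.writerow([t['id'], t['word'], t['country'], t['newspaper'], x, y, t['context']])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},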
{
"text": "To illustrate the information that is available through this visualization, we discuss the weighted chart for the concept COMPUTER SCREEN (Figure 1 shows a screen cap, but we strongly advise to look at the interactive version on the website). In Dutch, this concept can be refered to with (at least) three near-synonyms, which are color coded in the chart: beeldscherm (blue), computerscherm (green) and monitor (yellow). Each bubble in the chart is an occurrence (token) of one these nouns. As Figure 2 shows, roling over the bubbles makes the stretch of text visible in which the noun occurs (These contexts are also available in the lower right side bar). This usagein-context allows the lexicologist to interpret the precise meaning of the occurrence of the noun. The plot itself is a 2D representation of the semantic distances between all tokens (as measured with a token-level SVS) and reflects how the synonyms are distributed over the \"semantic space\". As can be expected with synonyms, they partially populate the same area of the space (the right hand side of the plot). Hovering over the bubbles and looking at the contexts, we can see that they indeed all refer to the concept COMPUTER SCREEN (See example contexts 1 to 3 in Table 2 ). However, we also see that a considerable part on the left hand side of the plot shows no overlap and is only populated by tokens of monitor. Looking more closely kuleuven.be/\u02dcu0038536/committees) 6 (http://code.google.com/apis/chart/ interactive/docs/gallery/motionchart. html) 7 Since we worked with synchronic data, we did not use this feature. However, Motion Charts have been used by Hilpert (http://omnibus.uni-freiburg.de/ mh608/motion.html) to visualize language change in MDS plots of hand coded diachronic linguistic data.",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 147,
"text": "(Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 495,
"end": 503,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1238,
"end": 1245,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},
{
"text": "at these occurrences, we see that they are instantiations of another meaning of monitor, viz. \"supervisor of youth leisure activities\" (See example context 4 in Table 2 ). Remember that our corpus is stratified for Belgian and Netherlandic Dutch. We can make this stratification visible by changing the color coding of the bubbles to COUNTRY in the top right-hand drop-down menu. Figure 3 shows that the left-hand side, i.e. monitor-only area of the plot, is also an all-Belgian area (hovering over the BE value in the legend makes the Belgian tokens in the plot flash). Changing the color coding to WORDBYCOUNTRY makes this even more clear. Indeed the youth leader meaning of monitor is only familiar to speakers of Belgian Dutch. Changing the color coding to the variable NEWSPAPER shows that the youth leader meaning is also typical for the popular, working class newspapers Het Laatste Nieuws (LN) and Het Nieuwsblad (NB) and is not prevelant in the Belgian high-brow newspapers. In order to provide more structure to the plot, we also experimented with including different K-means clustering solutions (from 2 up to 6 clusters) as colorcodable features, but these seem not very informative yet (but see section 6).",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 380,
"end": 388,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},
{
"text": "nr example context 1 De analisten houden met\u00e9\u00e9n oog de computerschermen in de gaten The analists keep one eye on the computer screen 2 Met een digitale camera... kan je je eigen foto op het beeldscherm krijgen With a digital camera, you can get your own photo on the computer screen 3 Met een paar aanpassingen wordt het beeld op de monitoren nog completer With a few adjustments, the image on the screen becomes even more complete 4 Voor augustus zijn de speelpleinen nog op zoek naar monitoren For August, the playgrounds are still looking for supervisors On the whole, the token-level SVS succeeds fairly well in giving an interpretable semantic structure to the tokens and the chart visualizes this. However, SVSs are fully automatic ways of modeling semantics and, not unexpectedly, some tokens are out of place. For example, in the lower left corner of the yellow cluster with monitor tokens referring to youth leader, there is also one blue Netherlandic token of beeldscherm. Thanks to the visualisation, such outliers can easily be detected by the lexicologist who can then report them to the computational linguist. The latter can then try to come up with a model that gives a better fit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},
{
"text": "Finally, let us briefly look at the chart of another concept, viz. COLLISION with its near-synonyms aanrijding and botsing. Here, we expect the literal collissions (between cars), for which both nouns can be used, to stand out form the figurative ones (differences in opinion between people), for which only botsing is apropriate in both varieties of Dutch. Figure 4 indeed shows that the right side of the chart is almost exclusively populated by botsing tokens. Looking at their contexts reveals that they indeed overwhelmingly instantiate the metaphorical meaning og collision. Yet also here, there are some \"lost\" aanrijding tokens with a literal meaning and the visualization shows that the current SVS implementation is not yet a fully adequate model for capturing the words' semantics.",
"cite_spans": [],
"ref_spans": [
{
"start": 358,
"end": 366,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualization",
"sec_num": "5"
},
{
"text": "Although Vector Spaces have become the mainstay of modeling lexical semantics in current statistical NLP, they are mostly used in a black box way, and how exactly they capture word meaning is not very clear. By visualizing their output, we hope to have at least partially cracked open this black box. Our aim is not just to make SVS output easier to analyze for computer linguists. We also want to make SVSs accessible for lexicologists and lexicographers with an interest in quantitative, empirical data analysis. Such co-operation brings mutual benefits: Computer linguists get access to expert evaluation of their models. Lexicologists and lexicographers can use SVSs to identify preliminary semantic structure based on large quantities of corpus data, instead of heaving to sort through long lists of unstructured examples of a word's usage (the classical concordances). To our knowledge, this paper is one of the first attempts to visualize Semantic Vector Spaces and make them accessible to a non-technical audience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General discussion",
"sec_num": "6"
},
{
"text": "Of course, this is still largely work in progress and a number of improvements and extensions are still possible. First of all, the call-outs for the bubbles in the Google Motion Charts were not designed to contain large stretches of text. Current corpus contexts are therefore to short to ana-lyze the precise meaning of the tokens. One option would be to have pop-up windows with larger contexts appear by clicking on the call-outs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General discussion",
"sec_num": "6"
},
{
"text": "Secondly, we didn't use the motion feature that gave the charts its name. However, if we have diachronic data, we could e.g. track the centroid of a word's tokens in the semantic space through time and at the same time show the dispersion of tokens around that centroid 8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General discussion",
"sec_num": "6"
},
{
"text": "Thirdly, in the current implementation, one important aspect of the black-box quality of SVSs is not dealt with: it's not clear which context features cause tokens to be similar in the SVS output, and, consequently, the interpreation of the distances in the MDS plot remains quite obscure. One option would be to use the cluster solutions, that are already available as color codable variables, and indicate the highest scoring context features that the tokens in each cluster have in common. Another option for bringing out sense-distinguishing context words was proposed by Rohrdantz et al. (2011) who use Latent Dirichlet Allocation to structure tokens. The loadings on these latent topics could also be color-coded in the chart.",
"cite_spans": [
{
"start": 576,
"end": 599,
"text": "Rohrdantz et al. (2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General discussion",
"sec_num": "6"
},
{
"text": "Fourthly, we already indicated that two dimensional MDS solutions have quite high stress values and a three dimensional solution would be better to represent the token-by-token similarities. This would require the 3D Charts, which are not currently offered by the Google Chart Tools. However both R and Matlab do have interactive 3D plotting functionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General discussion",
"sec_num": "6"
},
{
"text": "Finally, and most importantly, the plots currently do not allow any input from the user. If we want the plots to be the starting point of an indepth semantic analysis, the lexicologist should be able to annotate the occurrences with variables of their own. For example, they might want to code whether the occurrence refers to a laptop screen, a desktop screen or cell phone screen, to find out whether their is a finer-grained division of labor among the synonyms. Additionally, an evaluation of the SVS's performance might include moving wrongly positioned tokens in the plot and thus re-group tokens, based on the lexicologist's insights. Tracking these corrective movements might then be valuable input for the computer linguists to improve their models. Of course, this goes well beyond our rather opportunistic use of the Google Charts Tool. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General discussion",
"sec_num": "6"
},
{
"text": "Stress is a measure for that faithfulness. No such indication is directly available for LSA or LDA. However, we do think LSA and LDA can be used to provide extra structure to our visualizations, see section 6.5 To avoid dependence on commercial software, we also made an implementation based on the plotting options of R and the Python Image Library( https://perswww.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is basically the approach ofSagi et al. (2009) but after LSA and without interactive visualization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How we BLESSed distributional semantic evaluation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2011. How we BLESSed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GE- ometrical Models of Natural Language Semantics, pages 1-10, Edinburgh, UK. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Brody and Mirella Lapata. 2009. Bayesian Word Sense Induction. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 103-111, Athens, Greece. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatically Identifying Changes in the Semantic Orientation of Words",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Cook and Suzanne Stevenson. 2010. Automat- ically Identifying Changes in the Semantic Orien- tation of Words. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), pages 28-34, Valletta, Malta. ELRA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multidimensional Scaling",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cox",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cox",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cox and Michael Cox. 2001. Multidimen- sional Scaling. Chapman & Hall, Boca Raton.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The doctor and the semantician",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": ""
}
],
"year": 2010,
"venue": "Quantitative Methods in Cognitive Semantics: Corpus-Driven Approaches",
"volume": "",
"issue": "",
"pages": "63--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Geeraerts. 2010a. The doctor and the seman- tician. In Dylan Glynn and Kerstin Fischer, edi- tors, Quantitative Methods in Cognitive Semantics: Corpus-Driven Approaches, pages 63-78. Mouton de Gruyter, Berlin.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Theories of Lexical Semantics",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Geeraerts. 2010b. Theories of Lexical Semantics. Oxford University Press, Oxford.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using the Google Visualisation API with R: googleVis-0.2.4 Package Vignette",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Gesmann",
"suffix": ""
},
{
"first": "Diego",
"middle": [
"De"
],
"last": "Castillo",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Gesmann and Diego De Castillo. 2011. Using the Google Visualisation API with R: googleVis- 0.2.4 Package Vignette.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Quantitative Methods in Cognitive Semantics: Corpusdriven Approaches",
"authors": [
{
"first": "Dylan",
"middle": [],
"last": "Glynn",
"suffix": ""
},
{
"first": "Kerstin",
"middle": [],
"last": "Fischer",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "46",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dylan Glynn and Kerstin Fischer. 2010. Quanti- tative Methods in Cognitive Semantics: Corpus- driven Approaches, volume 46. Mouton de Gruyter, Berlin.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Topics in Semantic Representation",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological Review",
"volume": "114",
"issue": "",
"pages": "211--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths, Mark Steyvers, and Joshua Tenenbaum. 2007. Topics in Semantic Represen- tation. Psychological Review, 114:211-244.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modelling Word Similarity. An Evaluation of Automatic Synonymy Extraction Algorithms",
"authors": [
{
"first": "Kris",
"middle": [],
"last": "Heylen",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Speelman",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Language Resources and Evaluation Conference (LREC 2008)",
"volume": "",
"issue": "",
"pages": "3243--3249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kris Heylen, Yves Peirsman, Dirk Geeraerts, and Dirk Speelman. 2008. Modelling Word Similarity. An Evaluation of Automatic Synonymy Extraction Al- gorithms. In Proceedings of the Language Re- sources and Evaluation Conference (LREC 2008), pages 3243-3249, Marrakech, Morocco. ELRA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction and Representation of Knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "240--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K Landauer and Susan T Dumais. 1997. A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction and Rep- resentation of Knowledge. Psychological Review, 104(2):240-411.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Twente Nieuws Corpus (TwNC)",
"authors": [
{
"first": "Roeland",
"middle": [
"J.F."
],
"last": "Ordelman",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roeland J F Ordelman. 2002. Twente Nieuws Cor- pus (TwNC). Technical report, Parlevink Language Techonology Group. University of Twente.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dependency-based construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Document clustering with committees",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '02",
"volume": "",
"issue": "",
"pages": "199--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Dekang Lin. 2002. Document clus- tering with committees. In Proceedings of the 25th annual international ACM SIGIR conference on Re- search and development in information retrieval, SIGIR '02, pages 199-206, New York, NY, USA. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Size matters: tight and loose context definitions in English word space models",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Heylen",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics",
"volume": "",
"issue": "",
"pages": "34--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Peirsman, Kris Heylen, and Dirk Geeraerts. 2008. Size matters: tight and loose context defini- tions in English word space models. In Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics, pages 34-41, Hamburg, Germany. ESS- LLI.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The automatic identification of lexical variation between language varieties",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Speelman",
"suffix": ""
}
],
"year": 2010,
"venue": "Natural Language Engineering",
"volume": "16",
"issue": "4",
"pages": "469--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Peirsman, Dirk Geeraerts, and Dirk Speelman. 2010. The automatic identification of lexical varia- tion between language varieties. Natural Language Engineering, 16(4):469-490.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards Tracking Semantic Change by Visual Analytics",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Rohrdantz",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Hautli",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mayer",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Butt",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"A."
],
"last": "Keim",
"suffix": ""
},
{
"first": "Frans",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "305--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Rohrdantz, Annette Hautli, Thomas Mayer, Miriam Butt, Daniel A Keim, and Frans Plank. 2011. Towards Tracking Semantic Change by Vi- sual Analytics. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 305-310, Portland, Oregon, USA, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Aggregating dialectology and typology: linguistic variation in text and speech, within and across languages",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ruette",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Geeraerts",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Speelman",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Ruette, Dirk Geeraerts, Yves Peirsman, and Dirk Speelman. 2012. Semantic weighting mechanisms in scalable lexical sociolectometry. In Benedikt Szmrecsanyi and Bernhard W\u00e4lchli, editors, Aggre- gating dialectology and typology: linguistic vari- ation in text and speech, within and across lan- guages. Mouton de Gruyter, Berlin.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic Density Analysis: Comparing Word Meaning across Time and Phonetic Space",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Sagi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kaufmann",
"suffix": ""
},
{
"first": "Brady",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Geometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2009. Semantic Density Analysis: Comparing Word Meaning across Time and Phonetic Space. In Pro- ceedings of the Workshop on Geometrical Mod- els of Natural Language Semantics, pages 104- 111, Athens, Greece. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "97--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense dis- crimination. Computational Linguistics, 24(1):97- 124.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From Frequency to Meaning: Vector Space Models of Semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "1",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From Fre- quency to Meaning: Vector Space Models of Se- mantics. Journal of Artificial Intelligence Research, 37(1):141-188.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "At Last Parsing Is Now Operational",
"authors": [
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
}
],
"year": 2006,
"venue": "Verbum Ex Machina. Actes de la 13e conference sur le traitement automatique des langues naturelles (TALN06)",
"volume": "",
"issue": "",
"pages": "20--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gertjan van Noord. 2006. At Last Parsing Is Now Operational. In Verbum Ex Machina. Actes de la 13e conference sur le traitement automatique des langues naturelles (TALN06), pages 20-42, Leuven, Belgium. Presses universitaires de Louvain.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Screencap of Motion Chart for COMPUTER SCREEN Figure 2: token of beeldscherm with contextFigure 3: COMPUTER SCREEN tokens stratified by country Figure 4: Screencap of Motion Chart for COLLISION",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "gives some examples.",
"num": null,
"content": "
CONCEPT | nouns in synset |
INFRINGEMENT | inbreuk, overtreding |
GENOCIDE | volkerenmoord, genocide |
POLL | peiling, opiniepeiling, rondvraag |
MARIHUANA | cannabis, marihuana |
COUP | staatsgreep, coup |
MENINGITIS | hersenvliesontsteking, meningitis |
DEMONSTRATOR | demonstrant, betoger |
AIRPORT | vliegveld, luchthaven |
VICTORY | zege, overwinning |
HOMOSEXUAL | homo, homoseksueel, homofiel |
RELIGION | religie, godsdienst |
COMPUTER SCREEN | computerscherm, beeldscherm, monitor |
",
"type_str": "table",
"html": null
},
"TABREF1": {
"text": "Dutch synsets (sample) 1 Publication years 1999 up to 2002 of Algemeen Dagblad, NRC, Parool, Trouw and Volkskrant 2 Publication years 1999 up to 2005 of De Morgen, De Tijd, De Standaard, Het Laatste Nieuws, Het Nieuwsblad and Het Belang van Limburg",
"num": null,
"content": "",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Contexts (shown in chart by mouse roll-over)",
"num": null,
"content": "",
"type_str": "table",
"html": null
}
}
}
}