{
"paper_id": "I17-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:38:18.467024Z"
},
"title": "Distributional Modeling on a Diet: One-shot Word Learning from Text Only",
"authors": [
{
"first": "Su",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Austin",
"location": {}
},
"email": "roller@cs.utexas.edu"
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": "",
"affiliation": {},
"email": "katrin.erk@mail.utexas.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We test whether distributional models can do one-shot learning of definitional properties from text only. Using Bayesian models, we find that first learning overarching structure in the known data, regularities in textual contexts and in properties, helps one-shot learning, and that individual context items can be highly informative. Our experiments show that our model can learn properties from a single exposure when given an informative utterance.",
"pdf_parse": {
"paper_id": "I17-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "We test whether distributional models can do one-shot learning of definitional properties from text only. Using Bayesian models, we find that first learning overarching structure in the known data, regularities in textual contexts and in properties, helps one-shot learning, and that individual context items can be highly informative. Our experiments show that our model can learn properties from a single exposure when given an informative utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When humans encounter an unknown word in text, even with a single instance, they can often infer approximately what it means, as in this example from Lazaridou et al. (2014) :",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "Lazaridou et al. (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We found a cute, hairy wampimuk sleeping behind the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "People who hear this sentence typically guess that a wampimuk is an animal, or even that it is a mammal. Distributional models, which describe the meaning of a word in terms of its observed contexts (Turney and Pantel, 2010) , have been suggested as a model for how humans learn word meanings (Landauer and Dumais, 1997) . However, distributional models typically need hundreds of instances of a word to derive a highquality representation for it, while humans can often infer a passable meaning approximation from one sentence only (as in the above example). This phenomenon is known as fast mapping (Carey and Bartlett, 1978) , Our primary modeling objective in this paper is to explore a plausible model for fastmapping learning from textual context.",
"cite_spans": [
{
"start": 199,
"end": 224,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF33"
},
{
"start": 293,
"end": 320,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF19"
},
{
"start": 601,
"end": 627,
"text": "(Carey and Bartlett, 1978)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While there is preliminary evidence that fast mapping can be modeled distributionally (Lazaridou et al., 2016) , it is unclear what enables it.",
"cite_spans": [
{
"start": 86,
"end": 110,
"text": "(Lazaridou et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "How do humans infer word meanings from so little data? This question has been studied for grounded word learning, when the learner perceives an object in non-linguistic context that corresponds to the unknown word. The literature emphasizes the importance of learning general knowledge or overarching structure, which we define as the information that is learned by accumulation across concepts (e.g. regularities in property co-occurrence), across all concepts (Kemp et al., 2007) , In grounded word learning, overarching structure that has been proposed includes knowledge about which properties. For example knowledge about which properties are most important to object naming (Smith et al., 2002; Colunga and Smith, 2005) , or a taxonomy of concepts (Xu and Tenenbaum, 2007) .",
"cite_spans": [
{
"start": 462,
"end": 481,
"text": "(Kemp et al., 2007)",
"ref_id": "BIBREF16"
},
{
"start": 680,
"end": 700,
"text": "(Smith et al., 2002;",
"ref_id": "BIBREF29"
},
{
"start": 701,
"end": 725,
"text": "Colunga and Smith, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 754,
"end": 778,
"text": "(Xu and Tenenbaum, 2007)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we study models for fast mapping in word learning 1 from textual context alone, using probabilistic distributional models. Our task differs from the grounded case in that we do not perceive any object labeled by the unknown word. In that context, learning word meaning means learning the associated definitional properties and their weights (see Section 3). For the sake of interpretability, we focus on learning definitional properties We ask what kinds of overarching structure in distributional contexts and in properties will be helpful for one-shot word learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on learning from syntactic context. Distributional representations of syntactic context are directly interpretable as selectional constraints, which in manually created resources are typically characterized through high-level taxonomy classes (Kipper-Schuler, 2005; Fillmore et al., 2003) . So they should provide good evidence for the meaning of role fillers. Also, it has been shown that selectional constraints can be learned distributionally (Erk et al., 2010; \u00d3 S\u00e9aghdha and Korhonen, 2014; Ritter et al., 2010) . However, our point will not be that syntax is needed for fast word learning, but that it helps to observe overarching structure, with syntactic context providing a clear test bed.",
"cite_spans": [
{
"start": 252,
"end": 274,
"text": "(Kipper-Schuler, 2005;",
"ref_id": "BIBREF17"
},
{
"start": 275,
"end": 297,
"text": "Fillmore et al., 2003)",
"ref_id": "BIBREF9"
},
{
"start": 455,
"end": 473,
"text": "(Erk et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 474,
"end": 504,
"text": "\u00d3 S\u00e9aghdha and Korhonen, 2014;",
"ref_id": null
},
{
"start": 505,
"end": 525,
"text": "Ritter et al., 2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test two types of overarching structure for their usefulness in fast mapping. First, we hypothesize that it is helpful to learn about commonalities among context items, which enables mapping from contexts to properties. For example the syntactic contexts eat-dobj and cook-dobj should prefer similar targets: things that are cooked are also things that are eaten (Hypothesis H1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second hypothesis is that it will be useful to learn co-occurrence patterns between properties. That is, we hypothesize that in learning an entity is a mammal, we may also infer it is four-legged (Hypothesis H2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We do not intent to make strong cognitive claims, for which additional experimentation will be in order, and we leave this for future work. This work sets its goal on building a plausible computational model that models human fast-mapping in learning (i) well from limited grounded data, (ii) effectively from only one instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fast mapping and textual context. Fast mapping (Carey and Bartlett, 1978) is the human ability to construct provisional word meaning representations after one or few exposures. An important reason for why humans can do fast mapping is that they acquire overarching structure that constrains learning (Smith et al., 2002; Colunga and Smith, 2005; Kemp et al., 2007; Xu and Tenenbaum, 2007; Maas and Kemp, 2009) . In this paper, we ask what forms of overarching structure will be useful for text-based word learning. Lazaridou et al. (2014) consider fast mapping for grounded word learning, mapping image data to distributional representations, which is in a way the mirror image of our task. Lazaridou et al. (2016) were the first to explore fast mapping for text-based word learning, using an extension to word2vec with both textual and visual features. However, they model the unknown word simply by averaging the vectors of known words in the sentence, and do not explore what types of knowl-edge enable fast mapping.",
"cite_spans": [
{
"start": 47,
"end": 73,
"text": "(Carey and Bartlett, 1978)",
"ref_id": "BIBREF3"
},
{
"start": 300,
"end": 320,
"text": "(Smith et al., 2002;",
"ref_id": "BIBREF29"
},
{
"start": 321,
"end": 345,
"text": "Colunga and Smith, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 346,
"end": 364,
"text": "Kemp et al., 2007;",
"ref_id": "BIBREF16"
},
{
"start": 365,
"end": 388,
"text": "Xu and Tenenbaum, 2007;",
"ref_id": "BIBREF35"
},
{
"start": 389,
"end": 409,
"text": "Maas and Kemp, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 515,
"end": 538,
"text": "Lazaridou et al. (2014)",
"ref_id": "BIBREF20"
},
{
"start": 691,
"end": 714,
"text": "Lazaridou et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Definitional properties. Feature norms are definitional properties collected from human participants. Feature norm datasets are available from McRae et al. (2005) and Vigliocco et al. (2004) . In this paper we use feature norms as our target representations of word meaning. There are several recent approaches that learn to map distributional representations to feature norms (Johns and Jones, 2012; Rubinstein et al., 2015; F\u0203g\u0203r\u0203\u015fan et al., 2015; Herbelot and Vecchi, 2015a) . We also map distributional information to feature norms, but we do it based on a single textual instance (one-shot learning).",
"cite_spans": [
{
"start": 143,
"end": 162,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF23"
},
{
"start": 167,
"end": 190,
"text": "Vigliocco et al. (2004)",
"ref_id": "BIBREF34"
},
{
"start": 377,
"end": 400,
"text": "(Johns and Jones, 2012;",
"ref_id": "BIBREF14"
},
{
"start": 401,
"end": 425,
"text": "Rubinstein et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 426,
"end": 449,
"text": "F\u0203g\u0203r\u0203\u015fan et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 450,
"end": 477,
"text": "Herbelot and Vecchi, 2015a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In the current paper we use the Quantified McRae (QMR) dataset (Herbelot and Vecchi, 2015b) , which extends the McRae et al. (2005) feature norms by ratings on the proportion of category members that have a property, and the Animal dataset (Herbelot, 2013) , which is smaller but has the same shape. For example, most alligators are dangerous. The quantifiers are given probabilistic interpretations, so if most alligators are dangerous, the probability for a random alligator to be dangerous would be 0.95. This makes this dataset a good fit for our probabilistic distributional model. We discuss QMR and the Animal data further in Section 4.",
"cite_spans": [
{
"start": 63,
"end": 91,
"text": "(Herbelot and Vecchi, 2015b)",
"ref_id": "BIBREF13"
},
{
"start": 112,
"end": 131,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF23"
},
{
"start": 240,
"end": 256,
"text": "(Herbelot, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Bayesian models in lexical semantics. We use Bayesian models for the sake of interpretability and because the existing definitional property datasets are small. The Bayesian models in lexical semantics that are most related to our approach are Dinu and Lapata (2010) , who represent word meanings as distributions over latent topics that approximate senses, and Andrews et al. (2009) and Roller and Schulte im Walde (2013), who use multi-modal extensions of Latent Dirichlet Allocation (LDA) models (Blei et al., 2003) to represent co-occurrences of textual context and definitional features.\u00d3 S\u00e9aghdha (2010) and Ritter et al. (2010) use Bayesian approaches to model selectional preferences.",
"cite_spans": [
{
"start": 244,
"end": 266,
"text": "Dinu and Lapata (2010)",
"ref_id": "BIBREF6"
},
{
"start": 362,
"end": 383,
"text": "Andrews et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 499,
"end": 518,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 614,
"end": 634,
"text": "Ritter et al. (2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In this section we develop a series of models to test our hypothesis that acquiring general knowledge is helpful to word learning, in particular knowledge about similarities between context items (H1) and co-occurrences between properties (H2). The count-based model will implement neither hypoth-esis, while the bimodal topic model will implement both. To test the hypotheses separately, we employ two clustering approaches via Bernoulli Mixtures, which we use as extensions to the countbased model and bimodal topic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "Independent Bernoulli condition. Let Q be a set of definitional properties, C a set of concepts that the learner knows about, and V a vocabulary of context items. For most of our models, context items w \u2208 V will be predicate-role pairs such as eat-dobj. The task is determine properties that apply to an unknown concept u \u2208 C. Any concept c \u2208 C is associated with a vector c Ind (where \"Ind\" stands for \"independent Bernoulli probabilities\") of |Q| probabilities, where the i-th entry of c Ind is the probability that an instance of concept c would have property q i . These probabilities are independent Bernoulli probabilities. For instance, alligator Ind would have an entry of 0.95 for dangerous. An instance c \u2208 {0, 1} |Q| of a concept c \u2208 C is a vector of zeros and ones drawn from c Ind , where an entry of 1 at position i means that this instance has the property q i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Count-based Model",
"sec_num": "3.1"
},
{
"text": "The model proceeds in two steps. First it learns property probabilities for context items w \u2208 V . The model observes instances c occurring textually with context item w, and learns property probabilities for w, where the probability that w has for a property q indicates the probability that w would appear as a context item with an instance that has property q. In the second step the model uses the acquired context item representations to learn property probabilities for an unknown concept u. When u appears with w, the context item w \"imagines\" an instance (samples it from its property probabilities), and uses this instance to update the property probabilities of u. Instead of making point estimates, the model represents its uncertainty about the probability of a property through a Beta distribution, a distribution over Bernoulli probabilities. As a Beta distribution is characterized by two parameters \u03b1 and \u03b2, we associate each context item w \u2208 V with vectors w \u03b1 \u2208 R |Q| and w \u03b2 \u2208 R |Q| , where the i-th \u03b1 and \u03b2 values are the parameters of the Beta distribution for property q i . When an instance c is observed with context item w, we do a Bayesian update on w simply as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Count-based Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w \u03b1 = w \u03b1 + c w \u03b2 = w \u03b2 + (1 \u2212 c)",
"eq_num": "(1)"
}
],
"section": "The Count-based Model",
"sec_num": "3.1"
},
{
"text": "because the Beta distribution is the conjugate prior of the Bernoulli. To draw an instance from w, we draw it from the predictive posterior probabilities of its Beta distributions, w Ind = w \u03b1 /(w \u03b1 + w \u03b2 ). Likewise, we associate an unknown concept u with vectors u \u03b1 and u \u03b2 . When the model observes u in the context of w, it draws an instance from w Ind , and performs a Bayesian update as in (1) on the vectors associated with u. After training, the property probabilities for u are again the posterior predictive probabilities u Ind = u \u03b1 /(u \u03b1 +u \u03b2 ). The model can be used for multi-shot learning and oneshot learning in the same way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Count-based Model",
"sec_num": "3.1"
},
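The Count Independent update described above is compact enough to sketch in code. Below is a minimal NumPy sketch of the Beta-Bernoulli bookkeeping, assuming instances are NumPy 0/1 vectors over Q; the class and function names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

class BetaBernoulli:
    """Beta-Bernoulli representation over |Q| properties (one per context item or concept)."""
    def __init__(self, n_props, prior=1.0):
        self.alpha = np.full(n_props, prior)
        self.beta = np.full(n_props, prior)

    def update(self, instance):
        # Bayesian update as in Eq. (1); instance is a 0/1 vector over properties.
        self.alpha += instance
        self.beta += 1 - instance

    def predictive(self):
        # Posterior predictive probabilities: alpha / (alpha + beta).
        return self.alpha / (self.alpha + self.beta)

def observe_unknown_with(u, w):
    """One occurrence of unknown concept u with context item w: w 'imagines' an
    instance from its predictive probabilities, and u is updated with it."""
    imagined = (rng.random(len(w.alpha)) < w.predictive()).astype(int)
    u.update(imagined)
```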
{
"text": "Multinomial condition. We also test a multinomial variant of the count-based model, for greater comparability with the LDA model below. Here, the concept representation c Mult is a multinomial distribution over the properties in Q. (That is, all the properties compete in this model.) An instance of concept c is now a single property, drawn from c's multinomial. The representation of a context item w, and also the representation of the unknown concept u, is a Dirichlet distribution with |Q| parameters. Bayesian update of the representation of w based on an occurrence with c, and likewise Bayesian update of the representation of u based on an occurrence with w, is straightforward again, as the Dirichlet distribution is the conjugate prior of the multinomial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Count-based Model",
"sec_num": "3.1"
},
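For the Multinomial condition, the conjugate bookkeeping is analogous, with a Dirichlet in place of the Beta. A minimal sketch under the same assumptions (function names are ours):

```python
import numpy as np

def dirichlet_update(params, observed_property):
    """Conjugate update of a Dirichlet over |Q| properties after observing a single
    property index drawn from a concept's multinomial."""
    params = params.copy()
    params[observed_property] += 1.0
    return params

def predicted_distribution(params):
    """Posterior predictive property distribution (the Dirichlet mean)."""
    return params / params.sum()
```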
{
"text": "The two count-based models do not implement either of our two hypotheses. They compute separate selectional constraints for each context item, and do not attend to co-occurrences between properties. In the experiments below, the count-based models will be listed as Count Independent and Count Multinomial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Count-based Model",
"sec_num": "3.1"
},
{
"text": "We use an extension of LDA (Blei et al., 2003) to implement our hypotheses on the usefulness of overarching structure, both commonalities in selectional constraints across predicates, and cooccurrence of properties across concepts. In particular, we build on Andrews et al. (2009) in using a bimodal topic model, in which a single topic simultaneously generates both a context item and a property. We further build on Dinu and Lapata (2010) in having a \"pseudo-document\" for each concept c to represent its observed occurrences. In our case, this pseudo-document contains pairs of a context item w \u2208 V and a property q meaning that w has been observed to occur with an instance of c that had q. The generative story is as follows. For each known concept c, draw a multinomial \u03b8 c over topics. For each topic z, draw a multinomial \u03c6 z over context items w \u2208 V , and a multinomial \u03c8 z over properties q \u2208 Q. To generate an entry for c's pseudo-document, draw a topic z \u223c M ult(\u03b8 c ). Then, from z, simultaneously draw a context item from \u03c6 z and a property from \u03c8 z . Figure 1 shows the plate diagram for this model.",
"cite_spans": [
{
"start": 27,
"end": 46,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 259,
"end": 280,
"text": "Andrews et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 418,
"end": 440,
"text": "Dinu and Lapata (2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1066,
"end": 1074,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Bimodal Topic Model",
"sec_num": "3.2"
},
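The generative story above can be made concrete with a few lines of sampling code. This is a sketch of generation only (training uses collapsed Gibbs sampling, Section 4); theta, phi and psi are assumed to be given as probability vectors/matrices, and the names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pseudo_document(theta_c, phi, psi, length):
    """Bimodal topic model, generative direction: for each entry of concept c's
    pseudo-document, draw a topic z from theta_c, then a context item from phi[z]
    and a property from psi[z] (both conditioned on the same topic)."""
    doc = []
    for _ in range(length):
        z = rng.choice(len(theta_c), p=theta_c)
        w = rng.choice(phi.shape[1], p=phi[z])   # context-item index
        q = rng.choice(psi.shape[1], p=psi[z])   # property index
        doc.append((w, q))
    return doc
```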
{
"text": "To infer properties for an unknown concept u, we create a pseudo-document for u containing just the observed context items, no properties, as those are not observed. From this pseudo-document d u we infer the topic distribution \u03b8 u . Then the probability of a property q given d u is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bimodal Topic Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (q|d u ) = z P (z|\u03b8 u )P (q|\u03c8 z )",
"eq_num": "(2)"
}
],
"section": "The Bimodal Topic Model",
"sec_num": "3.2"
},
{
"text": "For the one-shot condition, where we only observe a single context item w with u, this simplifies to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bimodal Topic Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (q|w) = z P (z|w)P (q|\u03c8 z )",
"eq_num": "(3)"
}
],
"section": "The Bimodal Topic Model",
"sec_num": "3.2"
},
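Given an inferred topic distribution for the unknown concept, Eqs. (2) and (3) are just mixtures over per-topic property distributions. A small sketch (psi is assumed to be a topics-by-properties matrix; names are ours):

```python
import numpy as np

def property_probs(theta_u, psi):
    """Eq. (2): P(q | d_u) = sum_z P(z | theta_u) * P(q | psi_z)."""
    return np.asarray(theta_u) @ np.asarray(psi)

def one_shot_property_probs(p_z_given_w, psi):
    """Eq. (3): with a single context item w, weight the per-topic property
    distributions by P(z | w)."""
    return np.asarray(p_z_given_w) @ np.asarray(psi)
```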
{
"text": "We refer to this model as bi-TM below. The topics of this model implement our hypothesis H1 by grouping context items that tend to occur with the same concepts and the same properties. The topics also implement our hypothesis H2 by grouping properties that tend to occur with the same concepts and the same context items. By using multinomials \u03c8 z it makes the simplifying assumption that all properties compete, like the Count Multinomial model above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Bimodal Topic Model",
"sec_num": "3.2"
},
{
"text": "With the Count models, we investigate word learning without any overarching structures. With the bi-TMs, we investigate word learning with both types of overarching structures at once. In order to evaluate each of the two hypotheses separately, we use clustering with Bernoulli Mixture models of either the context items or the properties. A Bernoulli Mixture model (Juan and Vidal, 2004) assumes that a population of m-dimensional binary vectors x has been generated by a set of mixture components K, each of which is a vector of m Bernoulli probabilities:",
"cite_spans": [
{
"start": 366,
"end": 388,
"text": "(Juan and Vidal, 2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x) = |K| k=1 p(k)p(x|k)",
"eq_num": "(4)"
}
],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
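Eq. (4) can be computed directly for a binary vector once the mixture weights and component Bernoulli parameters are given. A minimal sketch (we assume components is a |K| x m matrix of Bernoulli parameters; names are ours):

```python
import numpy as np

def bernoulli_mixture_pdf(x, weights, components):
    """Eq. (4): p(x) = sum_k p(k) * p(x | k) for an m-dimensional binary vector x."""
    x = np.asarray(x)
    likelihoods = np.prod(components ** x * (1 - components) ** (1 - x), axis=1)
    return float(weights @ likelihoods)
```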
{
"text": "A Bernoulli Mixture can represent co-occurrence patterns between the m random variables it models without assuming competition between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
{
"text": "To test the effect of modeling cross-predicate selectional constraints, we estimate a Bernoulli Mixture model from n instances w for each w \u2208 V , sampled from w Ind (which is learned as in the Count Independent model). Given a Bernoulli Mixture model of |K| components, we then assign each context item w to its closest mixture component as follows. Say the instances of w used to estimate the Bernoulli Mixture were {w 1 , . . . , w n }, then we assign w to the component",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k w = argmax k n j=1 p(k|w j )",
"eq_num": "(5)"
}
],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
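The hard assignment of Eq. (5) sums, over the sampled instances of a context item, the posterior probability of each mixture component. A self-contained sketch under the same assumptions as above (names are ours):

```python
import numpy as np

def component_posterior(x, weights, components):
    """p(k | x) for a binary vector x under a Bernoulli Mixture with weights p(k)."""
    x = np.asarray(x)
    joint = weights * np.prod(components ** x * (1 - components) ** (1 - x), axis=1)
    return joint / joint.sum()

def assign_context_item(instances, weights, components):
    """Eq. (5): assign a context item to the component maximising the summed
    posterior over its sampled instances."""
    scores = sum(component_posterior(x, weights, components) for x in instances)
    return int(np.argmax(scores))
```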
{
"text": "We then re-train the representations of context items in the Count Multinomial condition, treating each occurrence of c with context w as an occurrence of c with k w . This yields a Count Multinomial model called Count BernMix H1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
{
"text": "To test the effect of modeling property co-occurrences, we estimate a |K|-component Bernoulli Mixture model from n instances of each known concept c \u2208 C, sampled from c Ind . We then represent each concept c by a vector c Mult , a multinomial with |K| parameters, as follows. Say the instances of c used to estimate the Bernoulli Mixture were {c 1 , . . . , c n }, then the k-th entry in c Mult is the average probability, over all c i , of being generated by component k:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c k = 1 n n j=1 p(k|c j )",
"eq_num": "(6)"
}
],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
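The soft representation of Eq. (6) averages the same per-instance posteriors instead of taking an argmax. A sketch (names are ours):

```python
import numpy as np

def concept_mixture_representation(instances, weights, components):
    """Eq. (6): entry k of c^Mult is the average posterior p(k | c_j) over the
    concept's sampled instances, giving a multinomial over mixture components."""
    def posterior(x):
        x = np.asarray(x)
        joint = weights * np.prod(components ** x * (1 - components) ** (1 - x), axis=1)
        return joint / joint.sum()
    return np.mean([posterior(x) for x in instances], axis=0)
```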
{
"text": "This can be used as a Count Multinomial model where the entries in c Mult stand for Bernoulli Mixture components rather than individual properties. We refer to it as Count BernMix H2. 2 Finally, we extend the bi-TM with the H2 Bernoulli Mixture in the same way as a Count Multinomial model, and list this extension as bi-TM BernMix H2. While the bi-TM already implements both H1 and H2, its assumption of competition between all properties is simplistic, and bi-TM BernMix H2 tests whether lifting this assumption will yield a better model. We do not extend the bi-TM with the H1 Bernoulli Mixture, as the assumption of competition between context items that the bi-TM makes is appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bernoulli Mixtures",
"sec_num": "3.3"
},
{
"text": "Definitional properties. As we use probabilistic models, we need probabilities of properties applying to concept instances. So the QMR dataset (Herbelot and Vecchi, 2015b) is ideally suited. QMR has 532 concrete noun concepts, each associated with a set of quantified properties. The quantifiers have been given probabilistic interpretations, mapping all\u21921, most\u21920.95, some\u21920.35, few\u21920.05, none\u21920. 3 Each concept/property pair was judged by 3 raters. We choose the majority rating when it exists, and otherwise the minimum proposed rating. To address sparseness, especially for the one-shot learning setting, we omit properties that are named for fewer than 5 concepts. This leaves us with 503 concepts and 220 properties We intentionally choose this small dataset: One of our main objectives is to explore the possibility of learning effectively from very limited training data. In addition, while the feature norm dataset is small, our distributional dataset (the BNC, see below) is not. The latter essentially serves as a pivot for us to propagate the knowledge from the feature norm data to the wider semantic space.",
"cite_spans": [
{
"start": 143,
"end": 171,
"text": "(Herbelot and Vecchi, 2015b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
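The quantifier-to-probability mapping and the frequency filter described above are simple to express in code. The sketch below assumes QMR has already been aggregated into (concept, property, quantifier) triples; the record layout and function names are ours, not the dataset's actual distribution format.

```python
from collections import defaultdict

# Probabilistic interpretation of the QMR quantifiers, as given above.
QUANT_TO_PROB = {"all": 1.0, "most": 0.95, "some": 0.35, "few": 0.05, "none": 0.0}

def build_property_probs(records, min_concepts=5):
    """records: (concept, property, quantifier) triples, one per concept/property
    pair after aggregating the three raters. Properties named for fewer than
    `min_concepts` concepts are dropped, as described above."""
    concepts_per_prop = defaultdict(set)
    for concept, prop, _ in records:
        concepts_per_prop[prop].add(concept)
    table = defaultdict(dict)
    for concept, prop, quant in records:
        if len(concepts_per_prop[prop]) >= min_concepts:
            table[concept][prop] = QUANT_TO_PROB[quant]
    return dict(table)
```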
{
"text": "It is a problem of both the original McRae et al. (2005) data and QMR that if a property is not named by participants, it is not listed, even if it applies. For example, the property four-legged 2 We use the H2 Bernoulli Mixture as a soft clustering because it is straightforward to do this through concept representations. For the H1 mixture, we did not see an obvious soft clustering, so we use it as a hard clustering. 3 The dataset also contains KIND properties that do not have probabilistic interpretations. Following Herbelot and Vecchi (2015a) we omit these properties.",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF23"
},
{
"start": 195,
"end": 196,
"text": "2",
"ref_id": null
},
{
"start": 422,
"end": 423,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "is missing for alligator in QMR. So we additionally use the Animal dataset of Herbelot (2013) , where every property has a rating for every concept. The dataset comprises 72 animal concepts with quantification information for 54 properties.",
"cite_spans": [
{
"start": 78,
"end": 93,
"text": "Herbelot (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "Distributional data. We use the British National Corpus (BNC) (The BNC Consortium, 2007) , with dependency parses from Spacy. 4 As context items, we use pairs pred, dep of predicates pred that are content words (nouns, verbs, adjectives, adverbs) but not stopwords, where a concept from the respective dataset (QMR, Animal) is a dependency child of pred via dep. In total we obtain a vocabulary of 500 QMR concepts and 72 Animal concepts that appear in the BNC, and 29,124 context items. We refer to this syntactic context as Syn. For comparison, we also use a baseline model with a bag-of-words (BOW) context window of 2 or 5 words, with stopwords removed.",
"cite_spans": [
{
"start": 62,
"end": 88,
"text": "(The BNC Consortium, 2007)",
"ref_id": "BIBREF31"
},
{
"start": 126,
"end": 127,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
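Context-item extraction of the kind described above can be sketched with spaCy's dependency parses. This is an illustrative sketch, not the authors' preprocessing pipeline: the pipeline name, the lemma-based concept matching, and the helper name are assumptions, and it requires an installed English model.

```python
import spacy

# Assumes an English pipeline with a parser is installed, e.g. en_core_web_sm.
nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def context_items(sentence, target_concepts):
    """Return (concept, pred-dep) pairs: the target concept must be a dependency
    child of a content-word, non-stopword head."""
    items = []
    for tok in nlp(sentence):
        head = tok.head
        if (tok.lemma_.lower() in target_concepts
                and head is not tok
                and head.pos_ in CONTENT_POS
                and not head.is_stop):
            items.append((tok.lemma_.lower(), f"{head.lemma_}-{tok.dep_}"))
    return items

# e.g. context_items("We found a cute, hairy wampimuk sleeping behind the tree.", {"wampimuk"})
```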
{
"text": "Models. We test our probabilistic models as defined in the previous section. While our focus is on one-shot learning, we also evaluate a multishot setting where we learn from the whole BNC, as a sanity check on our models. (We do not test our models in an incremental learning setting that adds one occurrence at a time. While this is possible in principle, the computational cost is prohibitive for the bi-TM.) We compare to the Partial Least Squares (PLS) model of Herbelot and Vecchi (2015a) 5 to see whether our models perform at state of the art levels. We also compare to a baseline that always predicts the probability of a property to be its relative frequency in the set C of known concepts (Baseline).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "We can directly use the property probabilities in QMR and the Animal data as concept representations c Ind for the Count Independent model. For the Count Multinomial model, we never explicitly compute c Mult . To sample from it, we first sample an instance c \u2208 {0, 1} |Q| from the independent Bernoulli vector of c, c Ind . From the properties that apply to c, we sample one (with equal probabilities) as the observed property. All priors for the count-based models (Beta priors or Dirichlet priors, respectively) are set to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
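Sampling an observed property from c Ind, as described above, is a two-step draw. A minimal sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_observed_property(c_ind):
    """Draw a 0/1 instance from the independent Bernoulli vector c_ind, then pick
    one of the properties that apply, uniformly at random (None if none apply)."""
    c_ind = np.asarray(c_ind, dtype=float)
    instance = rng.random(len(c_ind)) < c_ind
    applicable = np.flatnonzero(instance)
    return int(rng.choice(applicable)) if applicable.size else None
```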
{
"text": "For the bi-TM, a pseudo-document for a known concept c is generated as follows: Given an occurrence of known concept c with context item w in the BNC, we sample a property q from c (in the same way as for the Count Multinomial model), and add w, q to the pseudo-document for c. For training the bi-TM, we use collapsed Gibbs sampling (Steyvers and Griffiths, 2007) with 500 iterations for burn-in. The Dirichlet priors are uniformly set to 0.1 following Roller and Schulte im Walde (2013). We use 50 topics throughout.",
"cite_spans": [
{
"start": 334,
"end": 364,
"text": "(Steyvers and Griffiths, 2007)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "For all our models, we report the average performance from 5 runs. For the PLS benchmark, we use 50 components with otherwise default settings, following Herbelot and Vecchi (2015a) .",
"cite_spans": [
{
"start": 154,
"end": 181,
"text": "Herbelot and Vecchi (2015a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "Evaluation. We test all models using 5-fold cross validation and report average performance across the 5 folds. We evaluate performance using Mean Average Precision (MAP) , which tests to what extent a model ranks definitional properties in the same order as the gold data. Assume a system that predicts a ranking of n datapoints, where 1 is the highest-ranked, and assume that each datapoint i has a gold rating of I(i) \u2208 {0, 1}. This system obtains an Average Precision (AP) of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "AP = 1 n i=1 I(i) n i=1 Prec i \u2022 I(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
{
"text": "where Prec i is precision at a cutoff of i. Mean Average Precision is the mean over multiple AP values. In our case, n = |Q|, and we compare a model-predicted ranking of property probabilities with a binary gold rating of whether the property applies to any instances of the given concept. For the one-shot evaluation, we make a separate prediction for each occurrence of an unknown concept u in the BNC, and report MAP by averaging over the AP values for all occurrences of u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental Setup",
"sec_num": "4"
},
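The AP formula above, with binary gold ratings, reduces to averaging precision-at-i over the gold-positive positions of the predicted ranking. A small sketch (names are ours):

```python
import numpy as np

def average_precision(scores, gold):
    """AP of a predicted property ranking against binary gold labels:
    (1 / #positives) * sum over ranks i of Prec_i * I(i)."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    gold = np.asarray(gold)[order]
    if gold.sum() == 0:
        return 0.0
    precisions = np.cumsum(gold) / np.arange(1, len(gold) + 1)
    return float(precisions[gold == 1].mean())

def mean_average_precision(score_lists, gold_lists):
    """MAP: mean AP over concepts (or over occurrences, in the one-shot setting)."""
    return float(np.mean([average_precision(s, g) for s, g in zip(score_lists, gold_lists)]))
```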
{
"text": "Multi-shot learning. While our focus in this paper is on one-shot learning, we first test all models in a multi-shot setting. The aim is to see how well they perform when given ample amounts of training data, and to be able to compare their performance to an existing multi-shot model (as we will not have any related work to compare to for the one-shot setting.) The results are shown in Table 1 , where Syn shows results that use syntactic context (encoding selectional constraints) and BOW5 is a bag-of-words context with a window size of 5. We only compare our models to the baseline and benchmark for now, and do an indepth comparison of our models when we get to the one-shot task, which is our main focus. Across all models, the syntactic context outperforms the bag-of-words context. We also tested a bag-of-words context with window size 2 and found it to have a performance halfway between Syn and BOW5 throughout. This confirms our assumption that it is reasonable to focus on syntactic context, and for the rest of this paper, we test models with syntactic context only.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Focusing on Syn conditions now, we see that almost all models outperform the property frequency baseline, though the MAP scores for the baseline do not fall far behind those of the weakest count-based models. 6 The best of our models perform on par with the PLS benchmark of Herbelot and Vecchi (2015a) on QMR, and on the Animal dataset they outperform the benchmark. Comparing the two datasets, we see that all models show better performance on the cleaner (and smaller) Animal dataset than on QMR. This is probably because QMR suffers from many false negatives (properties that apply but were not mentioned), while Animal does not. The Count Independent model shows similar performance here and throughout all later experiments to the Count Multinomial (even though it matches the construction of the QMR and Animal datasets better), so to avoid clutter we do not report on it further below.",
"cite_spans": [
{
"start": 209,
"end": 210,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "One-shot learning. Table 2 : MAP scores, one-shot learning on the QMR and Animal datasets mance of our models on the one-shot learning task. We cannot evaluate the benchmark PLS as it is not suitable for one-shot learning. The baseline is the same as in Table 1 . The numbers shown are Average Precision (AP) values for learning from a single occurrence. Column all averages over all occurrences of a target in the BNC (using only context items that appeared at least 5 times in the BNC), and column oracle top-20 averages over the 20 context items that have the highest AP for the given target. As can be seen, AP varies widely across sentences: When we average over all occurrences of a target in the BNC, performance is close to baseline level. 7 But the most informative instances yield excellent information about an unknown concept, and lead to MAP values that are much higher than those achieved in multi-shot learning (Table 1) . We explore this more below. Comparing our models, we see that the bi-TM does much better throughout than any of the countbased models. Since the bi-TM model implements both cross-predicate selectional constraints (H1) and property co-occurrence (H2), we find both of our hypotheses confirmed by these results. The Bernoulli mixtures improved performance on the Animal dataset, with no clear pattern of which one improved performance more. On QMR, adding a Bernoulli mixture model harms performance across both the count-based and bi-TM models. We suspect that this is because of the false negative entries in QMR; an inspection of Bernoulli mixture H2 components supports this intuition, as the QMR ones were found to be of poorer quality than those for the Animal data.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 254,
"end": 261,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 926,
"end": 935,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Comparing Tables 1 and 2 we see that they show 7 Context items with few occurrences in the corpus perform considerably worse than baseline, as their property distributions are dominated by the small number of concepts with which they appear. Table 4 : QMR one-shot: AP for top and bottom 5 context items of gown the same patterns of performance: Models that do better on the multi-shot task also do better on the one-shot task. This is encouraging in that it suggests that it should be possible to build incremental models that do well both in a low-data and an abundant-data setting. Table 3 looks in more detail at what it is that the models are learning by showing the five highestprobability properties they are predicting for the concept gown. The top two entries are multishot models, the third shows the one-shot result from the context item with the highest AP. The bi-TM results are very good in both the multi-shot and the one-shot setting, giving high probability to some quite specific properties like has sleeves. The count-based model shows a clear frequency bias in erroneously giving high probabilities to the two overall most frequent properties, made of metal and an animal. This is due to the additive nature of the Count model: In updating unknown concepts from context items, frequent properties are more likely to be sampled, and their effect accumulates as the model does not take into account interactions among context items. The bi-TM, which models these interactions, is much more robust to the effect of property frequency.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 24,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
},
{
"start": 242,
"end": 249,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 585,
"end": 592,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Informativity. In Table 2 we saw that one-shot performance averaged over all context items in the whole corpus was quite bad, but that good, informative context items can yield high-quality property information. ther. For the concept gown, it shows the five context items that yielded the highest AP values, at the top undo-obj, with an AP as high as 0.7. This raises the question of whether we can predict the informativity of a context item. 8 We test three measures of informativity. The first is simply the frequency of the context item, with the rationale that more frequent context items should have more stable representations. Our second measure is based on entropy. For each context item w, we compute a distribution over properties as in the count-independent model, and measure the entropy of this distribution. If the distribution has few properties account for a majority of the probability mass, then w will have a low entropy, and would be expected to be more informative. Our third measure is based on the same intuition, that items with more \"concentrated\" selectional constraints should be more informative. If a context item w has been observed to occur with known concepts c 1 , . . . , c n , then this measure is the average cosine (AvgCos) of the property distributions (viewed as vectors) of any pair of c i , c j \u2208 {c 1 , . . . , c n }.",
"cite_spans": [
{
"start": 444,
"end": 445,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Count",
"sec_num": null
},
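The entropy and AvgCos informativity measures described above can be sketched as follows (names are ours; property distributions are assumed to be given as vectors):

```python
import numpy as np
from itertools import combinations

def entropy(dist):
    """Entropy of a context item's property distribution (lower = more concentrated)."""
    dist = np.asarray(dist, dtype=float)
    dist = dist / dist.sum()
    nz = dist[dist > 0]
    return float(-(nz * np.log(nz)).sum())

def avg_cos(property_vectors):
    """AvgCos: average pairwise cosine of the property distributions of the known
    concepts a context item has been observed with."""
    vecs = [np.asarray(v, dtype=float) for v in property_vectors]
    pairs = list(combinations(vecs, 2))
    if not pairs:
        return 0.0
    return float(np.mean([a @ b / (np.linalg.norm(a) * np.linalg.norm(b)) for a, b in pairs]))
```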
{
"text": "We evaluate the three informativity measures using Spearman's rho to determine the correlation of the informativity of a context item with the AP it produces for each unknown concept. We expect frequency and AvgCos to be positively correlated with AP, and entropy to be negatively correlated with AP. The result is shown in Table 5 . Again, all measures work better on the Animal data than on QMR, where they at best approach significance. The correlation is much better on the bi-TM models than on the count-based models, which is probably due to their higher-quality predictions. Overall, AvgCos emerges as the most robust indicator 8 Lazaridou et al. (2016) , who use a bag-of-words context in one-shot experiments, propose an informativity measure based on the number of contexst that constitute properties. we cannot do that with our syntactic context. Table 6 : QMR, bi-TM, one-shot: MAP by property type over (oracle) top 20 context items for informativity. 9 We now test AvgCos, as our best informativity measure, on its ability to select good context items. The last column of Table 2 shows MAP results for the top 20 context items based on their AvgCos values. The results are much below the oracle MAP (unsurprisingly, given the correlations in Table 5 ), but for QMR they are at the level of the multi-shot results of Table 1, showing that it is possible to some extent to automatically choose informative examples for one-shot learning. Properties by type. McRae et al. (2005) classify properties based on the brain region taxonomy of Cree and McRae (2003) . This enables us to test what types of properties are learned most easily in our fast-mapping setup by computing average AP separately by property type. To combat sparseness, we group property types into five groups, function (the function or use of an entity), taxonomic, visual, encyclopaedic, and other perceptual (e.g., sound). Intuitively, we would expect our contexts to best reflect taxonomic and function properties: Predicates that apply to noun target concepts often express functions of those targets, and manually specified selectional constraints are often characterized in terms of taxonomic classes. Table 6 confirms this intuition. Taxonomic properties achieve the highest MAP by a large margin, followed by functional properties. Visual properties score the lowest.",
"cite_spans": [
{
"start": 635,
"end": 660,
"text": "8 Lazaridou et al. (2016)",
"ref_id": null
},
{
"start": 965,
"end": 966,
"text": "9",
"ref_id": null
},
{
"start": 1470,
"end": 1489,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF23"
},
{
"start": 1548,
"end": 1569,
"text": "Cree and McRae (2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 858,
"end": 865,
"text": "Table 6",
"ref_id": null
},
{
"start": 1086,
"end": 1093,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1256,
"end": 1263,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 2186,
"end": 2193,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Count",
"sec_num": null
},
{
"text": "We have developed several models for one-shot learning word meanings from single textual contexts. Our models were designed learn word properties using distributional contexts (H1) or about co-occurrences of properties (H2). We find evidence that both kinds of general knowledge are helpful, especially when combined (in the bi-TM), or when used on clean property data (in the Animal dataset). We further saw that some contexts are highly informative, and preliminary expirements in informativity measures found that average pairwise similarity of seen role fillers (Avg-Cos) achieves some success in predicting which contexts are most useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we hope to test with other types of general knowledge, including a taxonomy of known concepts (Xu and Tenenbaum, 2007) ; wider-coverage property data (Baroni and Lenci, 2010, Type-DM) ; and alternative modalities (Lazaridou et al., 2016 , image features as \"properties\"). We expect our model will scale to these larger problems easily.",
"cite_spans": [
{
"start": 109,
"end": 133,
"text": "(Xu and Tenenbaum, 2007)",
"ref_id": "BIBREF35"
},
{
"start": 165,
"end": 198,
"text": "(Baroni and Lenci, 2010, Type-DM)",
"ref_id": null
},
{
"start": 228,
"end": 251,
"text": "(Lazaridou et al., 2016",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We would also like to explore better informativity measures and improvements for AvgCos. Knowledge about informative examples can be useful in human-in-the-loop settings, for example a user aiming to illustrate classes in an ontology with a few typical corpus examples. We also note that the bi-TM cannot be used in for truly incremental learning, as the cost of global re-computation after each seen example is prohibitive. We would like to explore probabilistic models that support incremental word learning, which would be interesting to integrate with an overall probabilistic model of semantics (Goodman and Lassiter, 2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, we interchangeably use the terms unknown word and unknown concept, as we learn properties, and properties belong to concepts rather than words, and we learn them from text, where we observe words rather than concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spacy.io5 Herbelot and Vecchi (2015a) is the only directly relevant previous work on the subject. Further, to the best of our knowledge, for one-shot property learning from text (only), our work has been the first attempt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is because MAP gives equal credit for all properties correctly predicted as non-zero. When we evaluate with Generalized Average Precision (GAP)(Kishida, 2005), which takes gold weights into account, the baseline model is roughly 10 points below other models. This indicates our models learn approximate property distributions. We omit GAP scores because they correlate strongly with MAP for non-baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tested a binned variant of the frequency measure, on the intuition that medium-frequency context items should be more informative than either highly frequent or rare ones. However, this measure did not show better performance than the non-binned frequency measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by the DARPA DEFT program under AFRL grant FA8750-13-2-0026 and by the NSF CAREER grant IIS 0845925. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the view of DARPA, DoD or the US government. We acknowledge the Texas Advanced Computing Center for providing grid resources that contributed to these results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Integrating experiential and distributional data to learn semantic representations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vinson",
"suffix": ""
}
],
"year": 2009,
"venue": "Psychological Review",
"volume": "116",
"issue": "3",
"pages": "463--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Andrews, Gabriella Vigliocco, and David Vin- son. 2009. Integrating experiential and distribu- tional data to learn semantic representations. Psy- chological Review, 116(3):463-498.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distributional memory: a general framework for corpus-based semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alexandero",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "4",
"pages": "673--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alexandero Lenci. 2010. Dis- tributional memory: a general framework for corpus-based semantics. Computational Linguis- tics, 36(4):673-721.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "4-5",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(4-5):993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Acquiring a single new word",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Carey",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Bartlett",
"suffix": ""
}
],
"year": 1978,
"venue": "Papers and Reports on Child Language Development",
"volume": "15",
"issue": "",
"pages": "17--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Carey and Elsa Bartlett. 1978. Acquiring a sin- gle new word. Papers and Reports on Child Lan- guage Development, 15:17-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "From the lexicon to expectations about kinds: A role for associative learning",
"authors": [
{
"first": "Eliana",
"middle": [],
"last": "Colunga",
"suffix": ""
},
{
"first": "Linda",
"middle": [
"B"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2005,
"venue": "Psychological Review",
"volume": "112",
"issue": "2",
"pages": "347--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliana Colunga and Linda B. Smith. 2005. From the lexicon to expectations about kinds: A role for asso- ciative learning. Psychological Review, 112(2):347- 382.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns)",
"authors": [
{
"first": "George",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Experimental Psychology: General",
"volume": "132",
"issue": "",
"pages": "163--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George S. Cree and Ken McRae. 2003. Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Jour- nal of Experimental Psychology: General, 132:163- 201.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring distributional similarity in context",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceedings of EMNLP, Cambridge, MA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A flexible, corpus-driven model of regular and inverse selectional preferences",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk, Sebastian Pad\u00f3, and Ulrike Pad\u00f3. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From distributional semantics to feature norms: Grounding semantic models in human perceptual data",
"authors": [
{
"first": "Luana",
"middle": [],
"last": "F\u0203g\u0203r\u0203\u015fan",
"suffix": ""
},
{
"first": "Eva",
"middle": [
"Maria"
],
"last": "Vecchi",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of IWCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luana F\u0203g\u0203r\u0203\u015fan, Eva Maria Vecchi, and Stephen Clark. 2015. From distributional semantics to fea- ture norms: Grounding semantic models in human perceptual data. In Proceedings of IWCS, London, Great Britain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Background to FrameNet",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Petruck",
"suffix": ""
}
],
"year": 2003,
"venue": "International Journal of Lexicography",
"volume": "16",
"issue": "",
"pages": "235--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. J. Fillmore, C. R. Johnson, and M. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235-250.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Probabilistic semantics and pragmatics: Uncertainty in language and thought",
"authors": [
{
"first": "D",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lassiter",
"suffix": ""
}
],
"year": 2014,
"venue": "Handbook of Contemporary Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah D. Goodman and Daniel Lassiter. 2014. Prob- abilistic semantics and pragmatics: Uncertainty in language and thought. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics. Wiley-Blackwell.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "What is in a text, what isn't and what this has to do with lexical semantics. Proceedings of IWCS",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot. 2013. What is in a text, what isn't and what this has to do with lexical semantics. Pro- ceedings of IWCS.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building a shared world:mapping distributional to modeltheoretic semantic spaces",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Vecchi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Eva Vecchi. 2015a. Building a shared world:mapping distributional to model- theoretic semantic spaces. In Proceedings of EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Linguistic Issues in Language",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Eva",
"middle": [
"Maria"
],
"last": "Vecchi",
"suffix": ""
}
],
"year": 2015,
"venue": "Technology",
"volume": "12",
"issue": "4",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Eva Maria Vecchi. 2015b. Many speakers, many worlds. Linguistic Issues in Lan- guage Technology, 12(4):1-20.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Perceptual inference through global lexical similarity",
"authors": [
{
"first": "Brendan",
"middle": [
"T"
],
"last": "Johns",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2012,
"venue": "Topics in Cognitive Science",
"volume": "4",
"issue": "",
"pages": "103--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan T Johns and Michael N Jones. 2012. Percep- tual inference through global lexical similarity. Top- ics in Cognitive Science, 4(1):103-120.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bernoulli mixture models for binary images",
"authors": [
{
"first": "Alfons",
"middle": [],
"last": "Juan",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ICPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfons Juan and Enrique Vidal. 2004. Bernoulli mix- ture models for binary images. In Proceedings of ICPR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning overhypotheses with hierarchical Bayesian models",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Perfors",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Developmental Science",
"volume": "10",
"issue": "3",
"pages": "307--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Kemp, Amy Perfors, and Joshua B. Tenen- baum. 2007. Learning overhypotheses with hier- archical Bayesian models. Developmental Science, 10(3):307-321.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "VerbNet: A broadcoverage, comprehensive verb lexicon",
"authors": [
{
"first": "Karin",
"middle": [],
"last": "Kipper-Schuler",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin Kipper-Schuler. 2005. VerbNet: A broad- coverage, comprehensive verb lexicon. Ph.D. thesis, Computer and Information Science Dept., Univer- sity of Pennsylvania, Philadelphia, PA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments",
"authors": [
{
"first": "Kazuaki",
"middle": [],
"last": "Kishida",
"suffix": ""
}
],
"year": 2005,
"venue": "NII Technical Reports",
"volume": "",
"issue": "14",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuaki Kishida. 2005. Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments. NII Technical Reports, 2005(14):1-19.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "",
"issue": "",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Landauer and Susan Dumais. 1997. A solution to Plato's problem: The latent semantic analysis the- ory of acquisition, induction, and representation of knowledge. Psychological Review, pages 211-240.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? Cross-modal map- ping between distributional semantics and the visual world. In Proceedings of ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multimodal word meaning induction from minimal exposure to natural text",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2016,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2016. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science, pages 1-30.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "One-shot learning with Bayesian networks",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Kemp",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 31st Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas and Charles Kemp. 2009. One-shot learning with Bayesian networks. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, Amsterdam, The Netherlands.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semantic feature production norms for a large set of living and nonliving things",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "George",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mcnorgan",
"suffix": ""
}
],
"year": 2005,
"venue": "Behavior Research Methods",
"volume": "37",
"issue": "4",
"pages": "547--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken McRae, George S. Cree, Mark S. Seidenberg, and Chris McNorgan. 2005. Semantic feature produc- tion norms for a large set of living and nonliving things. Behavior Research Methods, 37(4):547- 559.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Latent variable models of selectional preference",
"authors": [
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diarmuid\u00d3 S\u00e9aghdha. 2010. Latent variable models of selectional preference. In Proceedings of ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Probabilistic distributional semantics with latent variable models",
"authors": [
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "3",
"pages": "587--631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diarmuid\u00d3 S\u00e9aghdha and Anna Korhonen. 2014. Probabilistic distributional semantics with latent variable models. Computational Linguistics, 40(3):587-631.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Latent Dirichlet Allocation method for selectional preferences",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Mausam, and Oren Etzioni. 2010. A La- tent Dirichlet Allocation method for selectional pref- erences. In Proceedings of ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A multimodal lda model integrating textual, cognitive and visual modalities",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal lda model integrating textual, cognitive and visual modalities. In Proceedings of EMNLP.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "How well do distributional models capture different types of semantic knowledge?",
"authors": [
{
"first": "Dana",
"middle": [],
"last": "Rubinstein",
"suffix": ""
},
{
"first": "Effi",
"middle": [],
"last": "Levi",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "2",
"issue": "",
"pages": "726--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dana Rubinstein, Effi Levi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional mod- els capture different types of semantic knowledge? In Proceedings of ACL, volume 2, pages 726-730.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Object name learning provides on-the-job training for attention",
"authors": [
{
"first": "Linda",
"middle": [
"B"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"S"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Landau",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Gershkoff-Stowe",
"suffix": ""
},
{
"first": "Larissa",
"middle": [],
"last": "Samuelson",
"suffix": ""
}
],
"year": 2002,
"venue": "Psychological Science",
"volume": "13",
"issue": "1",
"pages": "13--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linda B. Smith, Susan S. Jones, Barbara Landau, Lisa Gershkoff-Stowe, and Larissa Samuelson. 2002. Object name learning provides on-the-job training for attention. Psychological Science, 13(1):13-19.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Probabilistic topic models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "Handbook of Latent Semantic Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steyvers and Tom Griffiths. 2007. Probabilistic topic models. In T. Landauer, D.S. McNamara, S. Dennis, and W. Kintsch, eds., Handbook of Latent Semantic Analysis.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The British National Corpus",
"authors": [
{
"first": "",
"middle": [],
"last": "The BNC Consortium",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The BNC Consortium. 2007. The British Na- tional Corpus, version 3 (BNC XML Edition).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Representing the meanings of object and action words: The featural and unitary semantic space hypothesis",
"authors": [
{
"first": "Gabriella",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vinson",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Merrill",
"middle": [],
"last": "Garrett",
"suffix": ""
}
],
"year": 2004,
"venue": "Cognitive Psychology",
"volume": "48",
"issue": "",
"pages": "422--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriella Vigliocco, David Vinson, William Lewis, and Merrill Garrett. 2004. Representing the meanings of object and action words: The featural and unitary semantic space hypothesis. Cognitive Psychology, 48:422-488.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Word learning as Bayesian inference",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological Review",
"volume": "114",
"issue": "2",
"pages": "245--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xu and Joshua B. Tenenbaum. 2007. Word learn- ing as Bayesian inference. Psychological Review, 114(2):245-272.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Plate diagram for the Bimodal Topic Model (bi-TM)",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "MAP scores, multi-shot learning on the QMR and Animal datasets",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>shows the perfor-</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "QMR: top 5 properties of gown.",
"content": "<table><tr><td>Top 2</td></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "illustrates this point fur-TM BernMix H2 0.23 \u2022 -0.37 \u2022 0.52*",
"content": "<table><tr><td/><td>Model</td><td>Freq.</td><td colspan=\"2\">Entropy AvgCos</td></tr><tr><td/><td>Count Mult.</td><td>0.09</td><td>-0.12</td><td>0.18</td></tr><tr><td>QMR</td><td colspan=\"2\">Count BernMix H1 0.07 Count BernMix H2 0.10 bi-TM plain 0.15 bi-TM BernMix H2 0.16</td><td>-0.10 -0.09 -0.09 -0.10</td><td>0.17 0.17 0.41 \u2022 0.39 \u2022</td></tr><tr><td>Ani.</td><td>bi-TM plain bi-</td><td>0.25</td><td>-0.40</td><td>0.49*</td></tr></table>",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "Correlation of informativity with AP, Spearman's \u03c1. * and \u2022 indicate significance at p < 0.05 and p < 0.1",
"content": "<table/>",
"html": null
}
}
}
}