{
"paper_id": "I17-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:43.589228Z"
},
"title": "Diachrony-aware Induction of Binary Latent Representations from Typological Features",
"authors": [
{
"first": "Yugo",
"middle": [],
"last": "Murawaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University Yoshida-honmachi",
"location": {
"addrLine": "Sakyo-ku",
"postCode": "606-8501",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "murawaki@i.kyoto-u.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Although features of linguistic typology are a promising alternative to lexical evidence for tracing evolutionary history of languages, a large number of missing values in the dataset pose serious difficulties for statistical modeling. In this paper, we combine two existing approaches to the problem: (1) the synchronic approach that focuses on interdependencies between features and (2) the diachronic approach that exploits phylogeneticallyand/or spatially-related languages. Specifically, we propose a Bayesian model that (1) represents each language as a sequence of binary latent parameters encoding inter-feature dependencies and (2) relates a language's parameters to those of its phylogenetic and spatial neighbors. Experiments show that the proposed model recovers missing values more accurately than others and that induced representations retain phylogenetic and spatial signals observed for surface features.",
"pdf_parse": {
"paper_id": "I17-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Although features of linguistic typology are a promising alternative to lexical evidence for tracing evolutionary history of languages, a large number of missing values in the dataset pose serious difficulties for statistical modeling. In this paper, we combine two existing approaches to the problem: (1) the synchronic approach that focuses on interdependencies between features and (2) the diachronic approach that exploits phylogeneticallyand/or spatially-related languages. Specifically, we propose a Bayesian model that (1) represents each language as a sequence of binary latent parameters encoding inter-feature dependencies and (2) relates a language's parameters to those of its phylogenetic and spatial neighbors. Experiments show that the proposed model recovers missing values more accurately than others and that induced representations retain phylogenetic and spatial signals observed for surface features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Features of linguistic typology such as basic word order (examples are SVO and SOV) and the presence or absence of tone constitute a promising resource that can potentially be used to uncover the evolutionary history of languages. It has been argued that in exceptional cases, typological features can reflect a time span of 10,000 years or more (Nichols, 1994) . Since typological features, by definition, allow us to compare an arbitrary pair of languages, they can be seen as the last hope for language isolates and tiny language families such as Ainu, Basque, and Japanese, for which lexicon-based historical-comparative lin-guistics 1 has failed to identify genetic relatives. Fortunately, the publication of a large typology database (Haspelmath et al., 2005) made it possible to take computational approaches to this area of study (Daum\u00e9 III and Campbell, 2007) . Murawaki (2015) pursued a pipeline approach to utilizing typological features for phylogenetic inference. Exploiting interdependencies found among features, Murawaki (2015) first mapped each language, represented as a sequence of surface features, into a sequence of continuous latent components. It was in this continuous space that phylogenetic relations among languages were subsequently inferred. Murawaki (2015) argued that since the conversion and the resulting latent representations were designed to reflect typological naturalness, reconstructed ancestral languages were also likely to be typologically natural.",
"cite_spans": [
{
"start": 346,
"end": 361,
"text": "(Nichols, 1994)",
"ref_id": "BIBREF28"
},
{
"start": 740,
"end": 765,
"text": "(Haspelmath et al., 2005)",
"ref_id": "BIBREF15"
},
{
"start": 838,
"end": 868,
"text": "(Daum\u00e9 III and Campbell, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 871,
"end": 886,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
},
{
"start": 1028,
"end": 1043,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
},
{
"start": 1272,
"end": 1287,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, however, we show that Murawaki (2015) rests on fragile underpinnings so that they need to be rebuilt. One of the most important problems underestimated by Murawaki (2015) is an alarmingly large number of missing values. The dataset is a matrix where languages are represented as rows and features as columns, but only less than 30% of the items are present after a modest preprocessing. What is worse, the situation is unlikely to change in the foreseeable future because of the thousands of languages in the world, there is ample documentation for only a handful. These missing values pose serious difficulties for statistical modeling. Ignoring uncertainty in data, however, Murawaki (2015) relied on point estimates of missing values provided by an existing method of imputation when inducing latent representations. In this paper, we take a Bayesian approach because it is known for its robustness in modeling uncertainties. We demonstrate that we can jointly infer missing values and latent representations.",
"cite_spans": [
{
"start": 170,
"end": 185,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another question left unanswered is how good the induced representations are. In this paper, we present two quantitative analyses of the induced representations. The first one is rather indirect: we measure how well a model recovers missing values, with the assumption that good representations must capture regularity in surface features. We show that the proposed method outperformed the pipelined imputation method of Murawaki (2015) among others.",
"cite_spans": [
{
"start": 421,
"end": 436,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second analysis involves geography. It is well known that the values of a surface feature do not distribute randomly in the world but reflect vertical (phylogenetic) transmissions from parents to children and horizontal (spatial or areal) transmissions between populations (Nichols, 1992) . For example, languages of Mainland Southeast Asia are known for having similar tone systems even though they belong to different language families. To measure the degrees of the two modes of transmissions, we use an autologistic model that investigates dependencies among languages (Towner et al., 2012; Yamauchi and Murawaki, 2016) . Since it requires the input to be discrete, we evaluate a new model that focuses on inter-feature dependencies in the same way as Murawaki (2015) but induces binary latent representations. We show that vertical and horizontal signals observed for surface features largely vanish from latent representations when only inter-feature dependencies are exploited. Although not directly applicable to the model of Murawaki (2015), our results suggest that the pipeline approach suffers from noise during phylogenetic inference. To address this problem, we extend the induction model to incorporate the autologistic model at the level of latent representations, rather than surface features. With this integrated model, we manage to let induced representations retain surface signals.",
"cite_spans": [
{
"start": 277,
"end": 292,
"text": "(Nichols, 1992)",
"ref_id": "BIBREF27"
},
{
"start": 577,
"end": 598,
"text": "(Towner et al., 2012;",
"ref_id": "BIBREF34"
},
{
"start": 599,
"end": 627,
"text": "Yamauchi and Murawaki, 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the end, the Bayesian generative model we propose induces binary latent representations by combining inter-feature dependencies and interlanguage dependencies, with primacy given to the former ( Figure 1 ). Whereas inter-feature dependencies are synchronic in nature, inter-language dependencies reflect diachrony. Thus we call the integrated model diachrony-aware induction.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 206,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to space limitation, we had to put technical details into the supplementary material. However, we would like to stress that the proposed model works only if it is armed with statistical techniques rarely found in the NLP literature. Together with missing values and binary representations, a large number of continuous variables that connect binary representations to surface features need to be inferred. Unfortunately, a na\u00efve Metropolis-Hastings algorithm does not converge within realistic time scales. We solve this problem by adopting Hamiltonian Monte Carlo (Neal, 2011) since it enables us to efficiently sample a large number of continuous variables at once. Likewise, the autologistic model contains an intractable normalization term, which prevents the application of the standard Metropolis-Hastings sampler. We use an approximate sampler instead (Liang, 2010 Greenberg (1963) proposed dozens of such patterns known as linguistic universals. A statistical model for discovering Greenbergian universals was presented by Daum\u00e9 III and Campbell (2007) . Itoh and Ueda (2004) used the Ising model to model the interac-tion between features. Although these studies entirely focused on surface patterns, they imply the presence of some latent structure behind these surface features. Some generative linguists argue for the existence of binary latent parameters behind surface features although they are controversial even among generative linguists (Boeckx, 2014) . We borrow the term parameter from generative linguistics because the name of feature is reserved for surface variables.",
"cite_spans": [
{
"start": 863,
"end": 875,
"text": "(Liang, 2010",
"ref_id": "BIBREF20"
},
{
"start": 876,
"end": 892,
"text": "Greenberg (1963)",
"ref_id": "BIBREF13"
},
{
"start": 1035,
"end": 1064,
"text": "Daum\u00e9 III and Campbell (2007)",
"ref_id": "BIBREF9"
},
{
"start": 1067,
"end": 1087,
"text": "Itoh and Ueda (2004)",
"ref_id": "BIBREF18"
},
{
"start": 1460,
"end": 1474,
"text": "(Boeckx, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parameters are part of the principles and parameters (P&P) framework (Chomsky and Lasnik, 1993) , where, the structure of a language is explained by (1) a set of universal principles that are common to all languages and (2) a set of parameters whose values vary among languages. Here we skip the former since our focus is on structural variability. According to P&P, if we set specific values to all the parameters, then we obtain a specific language. Each parameter is binary and, in general, sets the values of multiple surface features in a deterministic manner. For example, the head directionality parameter is either head-initial or head-final. If head-initial is chosen, then surface features are set to VO, NA and Prepositions; otherwise the language in question becomes OV, AN and Postpositions (Baker, 2002) . Baker (2002) discussed a number of parameters such as head directionality, polysynthesis, and topic prominent parameters.",
"cite_spans": [
{
"start": 69,
"end": 95,
"text": "(Chomsky and Lasnik, 1993)",
"ref_id": "BIBREF7"
},
{
"start": 783,
"end": 817,
"text": "AN and Postpositions (Baker, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Partly inspired by the P&P framework, we use a sequence of binary variables as the latent representation of a language. However, there are non-negligible differences between P&P and ours, which are discussed in Section S.2 of the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "What the structure behind surface features looks like is almost exclusively discussed by generative linguists, but it should be noted that they are not the only group who attempts to explain surface patterns. Roughly speaking, generative linguists are part of the synchronist camp, as contrasted with diachronists, who consider that at least some patterns observed in surface features arise from common paths of diachronic development (Anderson, 2016 ). An important factor of diachronic development is grammaticalization, by which content words change into function words (Heine and Kuteva, 2007) . For example, the correlation be-tween the order of adposition and noun and the order of genitive and noun might be explained by the fact that adpositions often derive from nouns.",
"cite_spans": [
{
"start": 435,
"end": 450,
"text": "(Anderson, 2016",
"ref_id": "BIBREF0"
},
{
"start": 573,
"end": 597,
"text": "(Heine and Kuteva, 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The standard model for phylogenetic inference is the tree model, where a trait is passed on from a parent to a child with occasional modifications. In fact, the recent success in the applications of statistical models to historical linguistic problems is largely attributed to the tree model (Gray and Atkinson, 2003; Bouckaert et al., 2012) . In linguistic typology, however, a non-tree-like mode of evolution has emerged as one of the central topics (Trubetzkoy, 1928; Campbell, 2006) . Typological features, like loanwords, can be borrowed from one language to another, and as a result, vertical (phylogenetic) signals are obscured by horizontal (spatial) transmission.",
"cite_spans": [
{
"start": 292,
"end": 317,
"text": "(Gray and Atkinson, 2003;",
"ref_id": "BIBREF12"
},
{
"start": 318,
"end": 341,
"text": "Bouckaert et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 452,
"end": 470,
"text": "(Trubetzkoy, 1928;",
"ref_id": "BIBREF35"
},
{
"start": 471,
"end": 486,
"text": "Campbell, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "2.2"
},
{
"text": "The task of incorporating both vertical and horizontal transmissions within a statistical model of evolution is notoriously challenging because of the excessive flexibility of horizontal transmissions. This is the reason why previously proposed models are coupled with some very strong assumptions, for example, that a reference tree is given a priori (Nelson-Sathi et al., 2010) , and that horizontal transmissions can be modeled through timeinvariant areal clusters (Daum\u00e9 III, 2009) .",
"cite_spans": [
{
"start": 352,
"end": 379,
"text": "(Nelson-Sathi et al., 2010)",
"ref_id": "BIBREF26"
},
{
"start": 468,
"end": 485,
"text": "(Daum\u00e9 III, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "2.2"
},
{
"text": "Consequently, we pursue a line of research in linguistic typology that draws on information on the current distribution of typological features without explicitly requiring the reconstruction of previous states (Nichols, 1992 (Nichols, , 1995 Parkvall, 2008; Wichmann and Holman, 2009) . The basic assumption is that if the feature in question is vertically stable, then a phylogenetically defined group of languages will tend to share the same value. Similarly, if the feature in question is horizontally diffusible, then spatially close languages would be expected to frequently share the same feature value. Since the current distribution of typological features is more or less affected by these factors, we need to disentangle the effects of each of these factors. To do this, Yamauchi and Murawaki (2016) adopted a variant of the autologistic model, which had been widely used to model the spatial distribution of a feature (Besag, 1974; Towner et al., 2012) . The model was also used to impute missing values because the phylogenetic and spatial neighbors of a language had some predictive power over its feature values.",
"cite_spans": [
{
"start": 211,
"end": 225,
"text": "(Nichols, 1992",
"ref_id": "BIBREF27"
},
{
"start": 226,
"end": 242,
"text": "(Nichols, , 1995",
"ref_id": "BIBREF29"
},
{
"start": 243,
"end": 258,
"text": "Parkvall, 2008;",
"ref_id": "BIBREF30"
},
{
"start": 259,
"end": 285,
"text": "Wichmann and Holman, 2009)",
"ref_id": "BIBREF36"
},
{
"start": 930,
"end": 943,
"text": "(Besag, 1974;",
"ref_id": "BIBREF2"
},
{
"start": 944,
"end": 964,
"text": "Towner et al., 2012)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "2.2"
},
{
"text": "The dataset we used in the present study is the online edition 2 of the World Atlas of Language Structures (WALS) (Haspelmath et al., 2005) . While Greenberg (1963) and generative linguists have manually induced patterns and parameters, WALS makes it possible to take computational approaches to modeling features (Daum\u00e9 III and Campbell, 2007; Daum\u00e9 III, 2009; Murawaki, 2015; Takamura et al., 2016; Murawaki, 2016) .",
"cite_spans": [
{
"start": 114,
"end": 139,
"text": "(Haspelmath et al., 2005)",
"ref_id": "BIBREF15"
},
{
"start": 148,
"end": 164,
"text": "Greenberg (1963)",
"ref_id": "BIBREF13"
},
{
"start": 314,
"end": 344,
"text": "(Daum\u00e9 III and Campbell, 2007;",
"ref_id": "BIBREF9"
},
{
"start": 345,
"end": 361,
"text": "Daum\u00e9 III, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 362,
"end": 377,
"text": "Murawaki, 2015;",
"ref_id": "BIBREF22"
},
{
"start": 378,
"end": 400,
"text": "Takamura et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 401,
"end": 416,
"text": "Murawaki, 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
{
"text": "WALS is essentially a matrix where languages are represented as rows and features as columns. As of 2017, it contained 2,679 languages and 192 surface features. It covered less than 15% of items in the matrix, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
{
"text": "We removed sign languages, pidgins and creoles from the matrix. We imputed some missing values that could trivially be inferred from other features. We then removed features that covered less than 10% of the languages. After the preprocessing, the number of languages L was 2,607 while the number of features N was reduced to 104. The coverage went up to 26.9%, but the rate was still alarmingly low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
{
"text": "In WALS, languages are accompanied by additional information. We used the following fields to model inter-language dependencies. (1) genera, the lower of the two-level phylogenetic groupings, and (2) single-point geographical coordinates (longitude and latitude). By connecting every pair of languages within a genus, we constructed a phylogenetic neighbor graph. A spatial neighbor graph was constructed by linking all language pairs that were located within a distance of R = 1000 km. On average, each language had 30.8 and 89.1 neighbors, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
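{
"text": "As a concrete illustration of this construction, the following minimal sketch (ours, not the authors' released code; the record layout is hypothetical) builds the two neighbor graphs from (genus, longitude, latitude) records, using the haversine great-circle distance:\n\nimport itertools\nfrom math import radians, sin, cos, asin, sqrt\n\ndef haversine_km(lon1, lat1, lon2, lat2):\n    # great-circle distance between two points on Earth, in kilometers\n    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))\n    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2\n    return 6371.0 * 2 * asin(sqrt(a))\n\ndef build_neighbor_graphs(langs, R=1000.0):\n    # langs: list of dicts with 'genus', 'lon', and 'lat' keys\n    phylo = [set() for _ in langs]\n    spatial = [set() for _ in langs]\n    for l1, l2 in itertools.combinations(range(len(langs)), 2):\n        if langs[l1]['genus'] == langs[l2]['genus']:\n            phylo[l1].add(l2)\n            phylo[l2].add(l1)\n        d = haversine_km(langs[l1]['lon'], langs[l1]['lat'], langs[l2]['lon'], langs[l2]['lat'])\n        if d <= R:\n            spatial[l1].add(l2)\n            spatial[l2].add(l1)\n    return phylo, spatial",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},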
{
"text": "The features in WALS are categorical. For example, Feature 81A, \"Order of Subject, Object and Verb\" has seven possible values: SOV, SVO, VSO, VOS, OVS, OSV and No dominant order, and each language incorporates one of these seven values. For each language, we arranged its features into a sequence. A sequence of categorical features can alternatively be represented as a binary sequence using the 1-of-F i coding scheme: Feature i with F i possible values was converted into F i binary items among which only one item takes 1. The number of binarized features M was 723.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
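{
"text": "The 1-of-F_i coding can be made concrete with a minimal sketch (ours; 0-based values, with None marking a missing feature):\n\ndef binarize(x, F):\n    # x[i]: observed value of categorical feature i (0-based), or None if missing\n    # F[i]: the number of possible values F_i of feature i\n    b = []\n    for x_i, F_i in zip(x, F):\n        if x_i is None:\n            b.extend([None] * F_i)\n        else:\n            b.extend(1 if j == x_i else 0 for j in range(F_i))\n    return b  # length M = sum(F); exactly one 1 per observed feature\n\n# e.g., binarize([1, None, 0], [3, 2, 2]) == [0, 1, 0, None, None, 1, 0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},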
{
"text": "Table 1: Notation. L: number of languages; K: number of parameters; M: number of binarized features; N: number of categorical features; Z \u2208 {0, 1}^{L\u00d7K}: binary parameter matrix; W \u2208 R^{K\u00d7M}: weight matrix; \u0398\u0303 \u2208 R^{L\u00d7M}: feature score matrix; \u0398 \u2208 [0, 1]^{L\u00d7M}: feature probability matrix; X \u2208 N^{L\u00d7N}: categorical feature matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
{
"text": "Since the proposed model is rather complicated, we present two key components before going into the integrated model. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "4"
},
{
"text": "The M binarized features are given double indices (1, 1), \u00b7\u00b7\u00b7, (1, F_1), (2, 1), \u00b7\u00b7\u00b7, (i, j), \u00b7\u00b7\u00b7, (N, F_N), where (i, j) points to feature i's j-th value. They are also given flat indices 1, \u00b7\u00b7\u00b7, m, \u00b7\u00b7\u00b7, M (M = \\sum_{i=1}^{N} F_i), and the two indexing schemes are related by the function f(i, j) = m. We need the flat representation because that is what the latent parameters work on. A parameter is expected to capture the relation between one feature's particular value (e.g., VO for the order of object and verb) and another feature's particular value (e.g., NA for the order of adjective and noun). Figure 2 illustrates how surface features are generated from binary latent parameters. We use matrix factorization (Srebro et al., 2005; Griffiths and Ghahramani, 2011) to capture inter-feature dependencies. Since the categorical feature matrix X cannot directly be decomposed into two matrices, we first construct the (unnormalized) feature score matrix \u0398\u0303 and then stochastically generate X from \u0398\u0303.",
"cite_spans": [
{
"start": 462,
"end": 483,
"text": "(Srebro et al., 2005;",
"ref_id": "BIBREF32"
},
{
"start": 484,
"end": 515,
"text": "Griffiths and Ghahramani, 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 347,
"end": 355,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "4"
},
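{
"text": "A minimal sketch (ours) of the index mapping, using 0-based indices instead of the paper's 1-based ones:\n\ndef make_flat_index(F):\n    # F[i] = number of possible values F_i of categorical feature i;\n    # returns f with f[(i, j)] = m, the flat index of feature i's j-th value\n    f, m = {}, 0\n    for i, F_i in enumerate(F):\n        for j in range(F_i):\n            f[(i, j)] = m\n            m += 1\n    return f, m  # m equals M = sum of all F_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "4"
},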
{
"text": "\u0398 is a product of binary parameter matrix Z and weight matrix W . The generation of Z will be described in Section 4.2. 3 Each item of\u0398,\u03b8 l,m , is a score for language l's m-th binarized feature. It is affected only by parameters with z l,k = 1 becaus\u1ebd \u2026 languages categorical features 1 2 1 3 2 1 4 1 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 2 3 3 Figure 2 : Stochastic parameter-to-feature generation.\u0398 = ZW encodes inter-feature dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 352,
"text": "\u2026 languages categorical features 1 2 1 3 2 1 4 1 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 2 3 3",
"ref_id": "TABREF1"
},
{
"start": 353,
"end": 361,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 l,m = K k=1 z l,k w k,m .",
"eq_num": "(1)"
}
],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "We locally apply normalization to\u0398 to obtain \u0398, in which \u03b8 l,i,j is the probability of language l taking value j for categorical feature i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 l,i,j = exp(\u03b8 l,f (i,j) ) j exp(\u03b8 l,f (i,j ) ) .",
"eq_num": "(2)"
}
],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "Finally, language l's i-th categorical feature, x l,i , is generated from this distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (x l,i | z l, * , W ) = \u03b8 l,i,x l,i ,",
"eq_num": "(3)"
}
],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "where z_{l,*} = (z_{l,1}, \\cdots, z_{l,K}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
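{
"text": "Taken together, Eqs. (1)-(3) amount to one row of a matrix product followed by a per-feature softmax and a categorical draw. A minimal numpy sketch (ours; F lists the F_i's, and the columns of W are assumed to follow the flat ordering of Section 4):\n\nimport numpy as np\n\ndef sample_features(z_l, W, F, rng):\n    # z_l: (K,) binary parameter vector of one language; W: (K, M) weight matrix\n    theta_hat = z_l @ W                        # Eq. (1): scores of all M binarized features\n    x_l, offset = [], 0\n    for F_i in F:\n        scores = theta_hat[offset:offset + F_i]\n        p = np.exp(scores - scores.max())      # Eq. (2): softmax over feature i's F_i values\n        p /= p.sum()\n        x_l.append(int(rng.choice(F_i, p=p)))  # Eq. (3): draw the categorical value\n        offset += F_i\n    return x_l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},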
{
"text": "Combining Eqs. 1and 2, we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 l,i,j \u221d exp( K k=1 z l,k w k,f (i,j) ) = K k=1 exp(z l,k w k,f (i,j) ).",
"eq_num": "(4)"
}
],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "We can see from Eq. (4) that this is a productof-experts model (Hinton, 2002) . If z l,k = 0, parameter k has no effect on \u03b8 l,i,j because exp(z l,k w k,f (i,j) ) = 1. Otherwise, if w k,f (i,j) > 0, it makes \u03b8 l,i,j larger, and if w k,f (i,j) < 0, it lowers \u03b8 l,i,j . Suppose that for parameter k, a certain group of languages takes z l,k = 1. If two categorical feature values (i 1 , j 1 ) and (i 2 , j 2 ) have positive weights (i.e., w k,f (i 1 ,j 1 ) > 0 and w k,f (i 2 ,j 2 ) > 0), the pair must often co-occur in these languages. Likewise, the fact that two feature values do not co-occur can be encoded as a positive weight for one value and a negative weight for the other.",
"cite_spans": [
{
"start": 63,
"end": 77,
"text": "(Hinton, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-feature Dependencies",
"sec_num": "4.1"
},
{
"text": "The autologistic model is used to generate each column of Z, z * ,k = (z 1,k , \u2022 \u2022 \u2022 , z L,k ). To construct the model, we use two neighbor graphs and the corresponding three counting functions, as illustrated in Figure 3 . V (z * ,k ) returns the number of pairs sharing the same value in the phylogenetic neighbor graph, and H(z * ,k ) is the spatial equivalent of V (z * ,k ). U (z * ,k ) gives the number of languages that take the value 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
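{
"text": "The three counting functions have direct implementations; a sketch (ours), with the neighbor graphs represented as lists of adjacency sets as in the Section 3 sketch:\n\ndef count_same(z, nbrs):\n    # number of neighboring pairs sharing the same value; each edge counted once\n    return sum(1 for l1 in range(len(z)) for l2 in nbrs[l1] if l1 < l2 and z[l1] == z[l2])\n\ndef U(z):\n    # number of languages taking the value 1\n    return sum(z)\n\n# V(z_k) = count_same(z_k, phylo); H(z_k) = count_same(z_k, spatial)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},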
{
"text": "We now introduce the following variables: vertical stability v k > 0, horizontal diffusibility h k > 0, and universality \u2212\u221e < u k < \u221e for each feature k. Then the probability of z * .k conditioned on v k , h k and u k is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "P (z * ,k | v k , h k , u k ) = exp v k V (z * ,k ) + h k H(z * ,k ) + u k U (z * ,k ) z * ,k exp v k V (z * ,k ) + h k H(z * ,k ) + u k U (z * ,k ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "The denominator is a normalization term, ensuring that the sum of the distribution equals one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "The autologistic model can be interpreted in terms of the competition associated with possible assignments of z * ,k for the probability mass 1. If a given value, z * ,k , has a relatively large V (z * ,k ), then setting a large value for v k enables it to appropriate fractions of the mass from its weaker rivals. However, if too large a value is set for v k , then it will be overwhelmed by its stronger rivals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "To acquire further insights into the model, let us consider the probability of language l taking value b \u2208 {0, 1}, conditioned on the rest of the languages, z \u2212l,k :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (z l,k = b | z \u2212l,k , v k , h k , u k ) \u221d exp (v k V l,k,b + h k H l,k,b + u k b) ,",
"eq_num": "(5)"
}
],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "where V l,k,b is the number of language l's phylogenetic neighbors that assume value b, and H l,k,b is its spatial counterpart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
{
"text": "P(z_{l,k} = b | z_{-l,k}, v_k, h_k, u_k) is thus expressed as a weighted linear combination of the three factors in log-space. It increases with the number of phylogenetic neighbors that assume value b. However, this probability depends not only on the phylogenetic neighbors of language l but also on its spatial neighbors and on universality. How strongly these factors affect the stochastic selection is controlled by v_k, h_k, and u_k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},
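{
"text": "In a Gibbs sampler, Eq. (5) is all that is needed to resample a single z_{l,k}; a minimal sketch (ours), ignoring the feature likelihood term that the full model multiplies in (Section 4.4):\n\nimport math, random\n\ndef sample_z_lk(l, k, z, phylo, spatial, v_k, h_k, u_k):\n    # z[n][k]: current parameter values; phylo/spatial: lists of adjacency sets\n    logp = []\n    for b in (0, 1):\n        V_lkb = sum(1 for n in phylo[l] if z[n][k] == b)\n        H_lkb = sum(1 for n in spatial[l] if z[n][k] == b)\n        logp.append(v_k * V_lkb + h_k * H_lkb + u_k * b)  # Eq. (5), up to a constant\n    p1 = 1.0 / (1.0 + math.exp(logp[0] - logp[1]))\n    return 1 if random.random() < p1 else 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-language Dependencies",
"sec_num": "4.2"
},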
{
"text": "Now we complete the generative model by integrating the two types of dependencies. The joint distribution is defined as P (A, Z, W, X) = P (A)P (Z|A)P (W )P (X|Z, W ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "where hyperparameters are omitted for brevity and A is a set of latent variables that control the generation of Z:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "P (A) = K k=1 P (v k )P (h k )P (u k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "Their prior distributions are: v k \u223c Gamma(\u03ba, \u03b8), h k \u223c Gamma(\u03ba, \u03b8), and u k \u223c N (0, \u03c3 2 ). 4 Next, z * ,k 's are generated as described in Section 4.2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
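{
"text": "With the hyperparameter values of footnote 4 (shape \u03ba = 1, scale \u03b8 = 1, \u03c3 = 10), drawing A takes a few lines of numpy (our sketch):\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nK = 50\nv = rng.gamma(shape=1.0, scale=1.0, size=K)  # vertical stabilities v_k > 0\nh = rng.gamma(shape=1.0, scale=1.0, size=K)  # horizontal diffusibilities h_k > 0\nu = rng.normal(loc=0.0, scale=10.0, size=K)  # universalities u_k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},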
{
"text": "P (Z | A) = K k=1 P (z * ,k | v k , h k , u k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "The generation of Z is followed by that of the corresponding weight matrix W \u2208 R K\u00d7M , and then we obtain the feature score matrix\u0398 = ZW . Each item of W , w k,m , is generated from Student's t-distribution with 1 degree of freedom. We choose this distribution for two reasons. First, it has heavier tails than the Gaussian distribution and allows some weights to fall far from 0. Second, our inference algorithm demands that the negative logarithm of the probability density function be differentiable (see Section S.4 for details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "The t-distribution satisfies the condition while the Laplace distribution does not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
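{
"text": "To see why, note that for Student's t with 1 degree of freedom (the standard Cauchy), the negative log-density is smooth, whereas the Laplace case involves |w|; a small check (ours):\n\nimport numpy as np\n\ndef neg_log_t1(w):\n    # -log p(w) = log(pi) + log(1 + w^2): differentiable everywhere\n    return np.log(np.pi) + np.log1p(w ** 2)\n\ndef grad_neg_log_t1(w):\n    # d/dw [-log p(w)] = 2w / (1 + w^2): well defined even at w = 0,\n    # unlike the Laplace negative log-density, whose |w| term has a kink at 0\n    return 2.0 * w / (1.0 + w ** 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},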
{
"text": "Finally, X is generated using\u0398 = ZW , as described in Section 4.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "P (X | Z, W ) = L l=1 N i=1 P (x l,i | z l, * , W ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Model",
"sec_num": "4.3"
},
{
"text": "As usual, we use Gibbs sampling to perform posterior inference. Given observed values x l,i , we iteratively update z l,k , v k , h k , u k , and w k, * as well as missing values x l,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.4"
},
{
"text": "Update x l,i . x l,i is sampled from Eq. (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.4"
},
{
"text": "z l,k . The posterior probability P (z l,k | \u2212) is proportional to Eq. (5) times the product of Eq. (3) for all feature i's of language l.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update",
"sec_num": null
},
{
"text": "Update v_k, h_k, and u_k. We want to sample v_k (and likewise h_k and u_k) from P(v_k | \u2212) \u221d P(v_k) P(z_{*,k} | v_k, h_k, u_k). This belongs to a class of problems known as sampling from doubly-intractable distributions (M\u00f8ller et al., 2006; Murray et al., 2006). While this remains a challenging problem in statistics, it is not difficult to sample the variables approximately if we give up theoretical rigor (Liang, 2010). The details of the algorithm we use can be found in Section S.3 of the supplementary material.",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(M\u00f8ller et al., 2006;",
"ref_id": "BIBREF21"
},
{
"start": 116,
"end": 136,
"text": "Murray et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 290,
"end": 303,
"text": "(Liang, 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Update",
"sec_num": null
},
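{
"text": "The exact procedure is given in Section S.3; as one standard construction in the spirit of Liang (2010), the following sketch (ours, under simplifying assumptions such as a reflected random-walk proposal and the Gamma(1, 1) prior of footnote 4) updates v_k with a double Metropolis-Hastings move, reusing count_same from the Section 4.2 sketch so that the intractable normalizers cancel:\n\nimport math, random\n\ndef gibbs_scan(z, v, h, u, phylo, spatial):\n    # one sweep of single-site updates under the autologistic model;\n    # z is the column z_{*,k}, a list of 0/1 values, one per language\n    z = list(z)\n    for l in range(len(z)):\n        lp = [v * sum(1 for n in phylo[l] if z[n] == b)\n              + h * sum(1 for n in spatial[l] if z[n] == b) + u * b for b in (0, 1)]\n        z[l] = 1 if random.random() < 1.0 / (1.0 + math.exp(lp[0] - lp[1])) else 0\n    return z\n\ndef unnorm_loglik(z, v, h, u, phylo, spatial):\n    # log of the numerator of the autologistic distribution\n    return v * count_same(z, phylo) + h * count_same(z, spatial) + u * sum(z)\n\ndef double_mh_update_v(z, v, h, u, phylo, spatial, n_scans=1, step=0.1):\n    v_new = abs(v + random.gauss(0.0, step))  # reflected proposal keeps v > 0\n    y = list(z)\n    for _ in range(n_scans):                  # auxiliary sample drawn under v_new\n        y = gibbs_scan(y, v_new, h, u, phylo, spatial)\n    log_r = (-v_new) - (-v)                   # Gamma(1, 1) prior: log p(v) = -v + const\n    log_r += unnorm_loglik(z, v_new, h, u, phylo, spatial) - unnorm_loglik(z, v, h, u, phylo, spatial)\n    log_r += unnorm_loglik(y, v, h, u, phylo, spatial) - unnorm_loglik(y, v_new, h, u, phylo, spatial)\n    return v_new if math.log(random.random()) < log_r else v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update",
"sec_num": null
},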
{
"text": "Update w k, * . The remaining problem is how to update w k,m . Since the number of weights is very large (K \u00d7 M ), the simple Metropolis-Hastings algorithm (G\u00f6r\u00fcr et al., 2006; Doyle et al., 2014) is not a workable option. To address this problem, we block-sample w k, * = (w k,1 , \u2022 \u2022 \u2022 , w k,M ) using Hamiltonian Monte Carlo (HMC) (Neal, 2011). A sketch of the algorithm can be found in Section S.4 of the supplementary material.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(G\u00f6r\u00fcr et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 177,
"end": 196,
"text": "Doyle et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Update",
"sec_num": null
},
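{
"text": "A generic HMC move of the kind used for w_{k,*} (Neal, 2011) looks as follows; this is our illustrative sketch against an arbitrary negative log-posterior and its gradient, not the authors' implementation, with eps and S corresponding to the step size and number of leapfrog steps reported in Section 5.1:\n\nimport numpy as np\n\ndef hmc_step(w, neg_log_p, grad, eps=0.05, S=10, rng=None):\n    # one HMC update of a whole weight row w = w_{k,*}\n    rng = rng or np.random.default_rng()\n    p = rng.normal(size=w.shape)             # sample momentum\n    w_new = w.copy()\n    p_new = p - 0.5 * eps * grad(w_new)      # half step for momentum\n    for s in range(S):\n        w_new = w_new + eps * p_new          # full step for position\n        if s < S - 1:\n            p_new = p_new - eps * grad(w_new)  # full step for momentum\n    p_new = p_new - 0.5 * eps * grad(w_new)  # final half step\n    h_old = neg_log_p(w) + 0.5 * p @ p       # Metropolis correction on the Hamiltonian\n    h_new = neg_log_p(w_new) + 0.5 * p_new @ p_new\n    return w_new if np.log(rng.random()) < h_old - h_new else w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update",
"sec_num": null
},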
{
"text": "We indirectly evaluated the proposed model, called SYNDIA, by means of missing value imputation. If it predicts missing feature values better than reasonable baselines, we can say that the induced parameters are justified. Although no ground truth exists for the missing portion of the dataset, missing value imputation can be evaluated by hiding some observed values and verifying the effectiveness of their recovery. We conducted a 10-fold cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "We ran SYNDIA with two different settings: K = 50 and 100. We performed posterior inference for 500 iterations. After that, we collected 100 samples of x l,i for each language, one per iteration. For each missing value x l,i , we output the most frequent value among the 100 samples. The HMC parameters and S were set to 0.05 and 10, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
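{
"text": "Outputting the most frequent value among the collected samples is a simple majority vote over posterior draws; a sketch (ours, with each draw stored as a dict mapping a missing cell (l, i) to its sampled value):\n\nfrom collections import Counter\n\ndef impute_by_vote(samples):\n    # samples: list of posterior draws (here, 100), all covering the same missing cells\n    cells = samples[0].keys()\n    return {c: Counter(draw[c] for draw in samples).most_common(1)[0][0] for c in cells}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},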
{
"text": "We applied simulated annealing to the sampling of z l,k . For the first 100 iterations, the inverse temperature was increased from 0.1 to 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "We compared SYNDIA with several baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "MFV For each categorical feature i, always output the most frequent value among observed x l,i . Surface-DIA An autologistic model applied to surface features (Yamauchi and Murawaki, 2016) . The details of the model are presented in Section S.5 of the supplementary material.",
"cite_spans": [
{
"start": 159,
"end": 188,
"text": "(Yamauchi and Murawaki, 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "DPMPM A Dirichlet process mixture of multinomial distributions with a truncated stick-breaking construction (Si and Reiter, 2013) used by Blasi et al. (2017) . It assigns a single categorical latent variable to each language. As an implementation, we used the R package NPBayesImpute.",
"cite_spans": [
{
"start": 108,
"end": 115,
"text": "(Si and",
"ref_id": "BIBREF31"
},
{
"start": 116,
"end": 157,
"text": "Reiter, 2013) used by Blasi et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "MCA A variant of multiple correspondence analysis (Josse et al., 2012) used by Murawaki (2015) . We used the imputeMCA function of the R package missMDA.",
"cite_spans": [
{
"start": 50,
"end": 70,
"text": "(Josse et al., 2012)",
"ref_id": "BIBREF19"
},
{
"start": 79,
"end": 94,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "SYN A simplified version of SYNDIA, with v k and h k removed from the model. See Section S.6 of the supplementary material for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "MFV and Surface-DIA can be seen as the models of inter-language dependencies while DPMPM, MCA and SYN are these of inter-feature dependencies. Table 2 shows the result. We can see that SYNDIA with K = 50 performed the best. -vertical -horizontal (SYN) 73.83% Table 3 : Ablation experiments for missing value imputation. K = 50.",
"cite_spans": [
{
"start": 224,
"end": 251,
"text": "-vertical -horizontal (SYN)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "Smaller K yielded higher accuracy although the likelihood P (X | Z, W ) went up as K increased. Due to the high ratio of missing values, the model might have overfitted the data with larger K. The fact that SYN outperformed Surface-DIA suggests that inter-feature dependencies have more predictive power than inter-language dependencies in the dataset. However, they are complimentary in nature as SYNDIA outperformed SYN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "We can confirm the limited expressive power of single categorical latent variables because DPMPM performed poorly even if a small value was set to the truncation level K * to avoid overfitting. MCA employs more expressive representations of a sequence of continuous variables for each language. It slightly outperformed DPMPM but was beaten by SYN by a large margin. We conjecture that MCA was more sensitive to initialization than the Bayesian model armed with MCMC sampling. In any case, this result indicates that the latent representations Murawaki (2015) obtained were of poorer quality than those of SYN, not to mention those of SYNDIA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "We also conducted ablation experiments by removing either v k or h k from the model. The result is shown in Table 3 . It turned out that the horizontal factor had stronger predictive power than the vertical factor, which has a negative implication on typology-based phylogenetic inference. ",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Missing Value Imputation",
"sec_num": "5.1"
},
{
"text": "Hereafter we use all observed features to perform posterior inference. We examined how vertically stable and horizontally diffusible the induced parameters were. For SYNDIA, we simply extracted v k and h k from posterior samples. For comparison, we used Surface-DIA to estimate vertical stability and horizontal diffusibility of surface features. The same autologistic model was used to estimate v k and h k of SYN after the posterior inference. For details, see Sections S.5 and S.6.2 of the supplementary material. Figure 4 summarizes the results. We can see that the most vertically stable latent parameters of SYNDIA are comparable to the most vertically stable surface features. The same holds for the most horizontally diffusible ones. Thus we can conclude that the induced representations retain ver-tical and horizontal signals observed for surface features.",
"cite_spans": [],
"ref_spans": [
{
"start": 517,
"end": 525,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Vertical and Horizontal Signals",
"sec_num": "5.2"
},
{
"text": "On the other hand, SYN halved vertical stability and horizontal diffusibility when transforming surface features into latent parameters. A plausible explanation of this failure is that for many scarcely documented languages, we simply did not have enough observed surface features to determine their latent representations only from inter-feature dependencies. Due to the inherent uncertainty, z l,k swung between 0 and 1 during posterior inference, regardless of the states of their neighbors. As a result, these languages seem to have blocked vertical and horizontal signals. By contrast, SYNDIA appears to have flipped z l,k without disrupting inter-language dependencies when there were.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vertical and Horizontal Signals",
"sec_num": "5.2"
},
{
"text": "In summary, the experimental results have neg- ative implications for the pipeline approach pursued by Murawaki (2015) , where the inter-feature dependency-based induction of latent representations is followed by phylogenetic inference. Fortunately, evidence presented up to this point suggests that it can be readily replaced with the proposed model. Figure 5 compares a latent parameter of SYNDIA with a surface feature on the world map. Some surface features show several geographic clusters of large size, telling something about the evolutionary history of languages. Even with a large number of missing values, SYNDIA yielded comparable geographic clusters for some parameters. Some geographic clusters were also produced by SYN, especially when the estimation of z l,k was stable. In our subjective evaluation, SYNDIA appeared to show clearer patterns than SYN. Needless to say, not all surface features were associated with clear geographic patterns, and not all latent parameters were. Overall, the results shed a positive light on the applicability of the induced representations to phylogenetic inference.",
"cite_spans": [
{
"start": 103,
"end": 118,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Vertical and Horizontal Signals",
"sec_num": "5.2"
},
{
"text": "We also checked the weight matrix W (Fig-ure S. 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 47,
"text": "(Fig-ure S.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "It is not easy to analyze qualitatively but it deserves future investigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "In this paper, we presented a Bayesian model that induces binary latent parameters from surface features of linguistic typology. We combined inter-language dependencies with inter-feature dependencies to obtain the latent representations of better quality. Gathering various statistical techniques, we managed to create the complex but workable model. The source code is publicly available at https://github.com/ murawaki/latent-typology. We pointed out that typology-based phylogenetic inference proposed by Murawaki (2015) had weak foundations, and we rebuilt them from scratch. The whole long paper was needed to do so, but our ultimate goal is the same as the one stated by Murawaki (2015) . In the future, we would like to utilize the new latent representations to uncover the evolutionary history of languages.",
"cite_spans": [
{
"start": 509,
"end": 524,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
},
{
"start": 678,
"end": 693,
"text": "Murawaki (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "By lexicon-based historical-comparative linguistics, we mean broad topics including sound laws, cognates, and historical changes in inflectional paradigms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://wals.info/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although the natural choice for modeling binary latent matrices is an Indian buffet process (IBP) (Griffiths and Ghahramani, 2011), we do not take this approach for reasons we explain in Section S.1 of the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the experiments, we set shape \u03ba = 1, scale \u03b8 = 1, and standard deviation \u03c3 = 10. These priors were not noninformative, but they were sufficiently gentle in the regions where these parameters typically resided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partly supported by JSPS KAKENHI Grant Number 26730122.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synchronic versus diachronic explanation and the nature of the language faculty",
"authors": [
{
"first": "Stephen",
"middle": [
"R"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 2016,
"venue": "Annual Review of Linguistics",
"volume": "2",
"issue": "",
"pages": "1--425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen R. Anderson. 2016. Synchronic versus di- achronic explanation and the nature of the language faculty. Annual Review of Linguistics, 2:1-425.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Atoms of Language: The Mind's Hidden Rules of Grammar",
"authors": [
{
"first": "Mark",
"middle": [
"C"
],
"last": "Baker",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark C. Baker. 2002. The Atoms of Language: The Mind's Hidden Rules of Grammar. Basic Books.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spatial interaction and the statistical analysis of lattice systems",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Besag",
"suffix": ""
}
],
"year": 1974,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "",
"issue": "",
"pages": "192--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Besag. 1974. Spatial interaction and the statisti- cal analysis of lattice systems. Journal of the Royal Statistical Society. Series B (Methodological), pages 192-236.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Grammars are robustly transmitted even during the emergence of creole languages",
"authors": [
{
"first": "Dami\u00e1n",
"middle": [
"E"
],
"last": "Blasi",
"suffix": ""
},
{
"first": "Susanne",
"middle": [
"Maria"
],
"last": "Michaelis",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
}
],
"year": 2017,
"venue": "Nature Human Behaviour",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dami\u00e1n E. Blasi, Susanne Maria Michaelis, and Martin Haspelmath. 2017. Grammars are robustly transmit- ted even during the emergence of creole languages. Nature Human Behaviour.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What principles and parameters got wrong",
"authors": [
{
"first": "Cedric",
"middle": [],
"last": "Boeckx",
"suffix": ""
}
],
"year": 2014,
"venue": "Treebanks: Building and Using Parsed Corpora",
"volume": "",
"issue": "",
"pages": "155--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cedric Boeckx. 2014. What principles and parame- ters got wrong. In M. Carme Picallo, editor, Tree- banks: Building and Using Parsed Corpora, pages 155-178. Oxford University Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mapping the origins and expansion of the Indo-European language family",
"authors": [
{
"first": "Remco",
"middle": [],
"last": "Bouckaert",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Lemey",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Simon",
"middle": [
"J"
],
"last": "Greenhill",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"V"
],
"last": "Alekseyenko",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"J"
],
"last": "Drummond",
"suffix": ""
},
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Marc",
"middle": [
"A"
],
"last": "Suchard",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2012,
"venue": "Science",
"volume": "337",
"issue": "6097",
"pages": "957--960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Remco Bouckaert, Philippe Lemey, Michael Dunn, Simon J. Greenhill, Alexander V. Alekseyenko, Alexei J. Drummond, Russell D. Gray, Marc A. Suchard, and Quentin D. Atkinson. 2012. Mapping the origins and expansion of the Indo-European lan- guage family. Science, 337(6097):957-960.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Areal linguistics",
"authors": [
{
"first": "Lyle",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2006,
"venue": "Encyclopedia of Language and Linguistics",
"volume": "",
"issue": "",
"pages": "454--460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lyle Campbell. 2006. Areal linguistics. In Encyclo- pedia of Language and Linguistics, Second Edition, pages 454-460. Elsevier.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The theory of principles and parameters",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Lasnik",
"suffix": ""
}
],
"year": 1993,
"venue": "Joachim Jacobs, Arnim von Stechow, Wolfgang Sternefeld, and Theo Vennemann",
"volume": "1",
"issue": "",
"pages": "506--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky and Howard Lasnik. 1993. The the- ory of principles and parameters. In Joachim Ja- cobs, Arnim von Stechow, Wolfgang Sternefeld, and Theo Vennemann, editors, Syntax: An International Handbook of Contemporary Research, volume 1, pages 506-569. De Gruyter.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Non-parametric Bayesian areal linguistics",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "593--601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2009. Non-parametric Bayesian areal linguistics. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 593-601.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Bayesian model for discovering typological implications",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Lyle",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Lyle Campbell. 2007. A Bayesian model for discovering typological implications. In Proceedings of the 45th Annual Meeting of the Asso- ciation of Computational Linguistics, pages 65-72.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nonparametric learning of phonological constraints in optimality theory",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Doyle",
"suffix": ""
},
{
"first": "Klinton",
"middle": [],
"last": "Bicknell",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1094--1103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Doyle, Klinton Bicknell, and Roger Levy. 2014. Nonparametric learning of phonological con- straints in optimality theory. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1094-1103.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A choice model with infinitely many latent features",
"authors": [
{
"first": "Dilan",
"middle": [],
"last": "G\u00f6r\u00fcr",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "J\u00e4kel",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"Edward"
],
"last": "Rasmussen",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "361--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilan G\u00f6r\u00fcr, Frank J\u00e4kel, and Carl Edward Rasmussen. 2006. A choice model with infinitely many latent features. In Proceedings of the 23rd International Conference on Machine Learning, pages 361-368.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Language-tree divergence times support the Anatolian theory of Indo-European origin",
"authors": [
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2003,
"venue": "Nature",
"volume": "426",
"issue": "6965",
"pages": "435--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the Ana- tolian theory of Indo-European origin. Nature, 426(6965):435-439.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Universals of language",
"authors": [
{
"first": "Joseph",
"middle": [
"H"
],
"last": "Greenberg",
"suffix": ""
}
],
"year": 1963,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph H. Greenberg, editor. 1963. Universals of lan- guage. MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Indian buffet process: An introduction and review",
"authors": [
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "1185--1224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Zoubin Ghahramani. 2011. The Indian buffet process: An introduction and review. Journal of Machine Learning Research, 12:1185-1224.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The World Atlas of Language Structures",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Dryer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gil",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Comrie",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Haspelmath, Matthew Dryer, David Gil, and Bernard Comrie, editors. 2005. The World Atlas of Language Structures. Oxford University Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Genesis of Grammar: A Reconstruction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Heine",
"suffix": ""
},
{
"first": "Tania",
"middle": [],
"last": "Kuteva",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Heine and Tania Kuteva. 2007. The Genesis of Grammar: A Reconstruction. Oxford University Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Training products of experts by minimizing contrastive divergence",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural Computation",
"volume": "14",
"issue": "8",
"pages": "1771--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Com- putation, 14(8):1771-1800.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The ising model for changes in word ordering rules in natural languages",
"authors": [
{
"first": "Yoshiaki",
"middle": [],
"last": "Itoh",
"suffix": ""
},
{
"first": "Sumie",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2004,
"venue": "Physica D: Nonlinear Phenomena",
"volume": "198",
"issue": "3",
"pages": "333--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshiaki Itoh and Sumie Ueda. 2004. The ising model for changes in word ordering rules in natu- ral languages. Physica D: Nonlinear Phenomena, 198(3):333-339.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Handling missing values with regularized iterative multiple correspondence analysis",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Josse",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Chavent",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Classification",
"volume": "29",
"issue": "1",
"pages": "91--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Josse, Marie Chavent, Benot Liquet, and Fran\u00e7ois Husson. 2012. Handling missing values with reg- ularized iterative multiple correspondence analysis. Journal of Classification, 29(1):91-116.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A double Metropolis-Hastings sampler for spatial models with intractable normalizing constants",
"authors": [
{
"first": "Faming",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Statistical Computation and Simulation",
"volume": "80",
"issue": "9",
"pages": "1007--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faming Liang. 2010. A double Metropolis-Hastings sampler for spatial models with intractable normal- izing constants. Journal of Statistical Computation and Simulation, 80(9):1007-1022.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants",
"authors": [
{
"first": "Jesper",
"middle": [],
"last": "M\u00f8ller",
"suffix": ""
},
{
"first": "Anthony",
"middle": [
"N"
],
"last": "Pettitt",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "Kasper",
"middle": [
"K"
],
"last": "Berthelsen",
"suffix": ""
}
],
"year": 2006,
"venue": "Biometrika",
"volume": "93",
"issue": "2",
"pages": "451--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesper M\u00f8ller, Anthony N. Pettitt, R. Reeves, and Kasper K. Berthelsen. 2006. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika, 93(2):451-458.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Continuous space representations of linguistic typology and their application to phylogenetic inference",
"authors": [
{
"first": "Yugo",
"middle": [],
"last": "Murawaki",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "324--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yugo Murawaki. 2015. Continuous space representa- tions of linguistic typology and their application to phylogenetic inference. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 324-334.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Statistical modeling of creole genesis",
"authors": [
{
"first": "Yugo",
"middle": [],
"last": "Murawaki",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yugo Murawaki. 2016. Statistical modeling of creole genesis. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "MCMC for doubly-intractable distributions",
"authors": [
{
"first": "Iain",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "David",
"middle": [
"J C"
],
"last": "Mackay",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "359--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iain Murray, Zoubin Ghahramani, and David J. C. MacKay. 2006. MCMC for doubly-intractable dis- tributions. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, pages 359-366.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "MCMC using Hamiltonian dynamics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neal",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "113--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radford M. Neal. 2011. MCMC using Hamilto- nian dynamics. In Steve Brooks, Andrew Gelman, Galin L. Jones, and Xiao-Li Meng, editors, Hand- book of Markov Chain Monte Carlo, pages 113-162. CRC Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Networks uncover hidden lexical borrowing in Indo-European language evolution",
"authors": [
{
"first": "Shijulal",
"middle": [],
"last": "Nelson-Sathi",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Geisler",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Fangerau",
"suffix": ""
},
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Royal Society B: Biological Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shijulal Nelson-Sathi, Johann-Mattis List, Hans Geisler, Heiner Fangerau, Russell D. Gray, William Martin, and Tal Dagan. 2010. Networks uncover hidden lexical borrowing in Indo-European lan- guage evolution. Proceedings of the Royal Society B: Biological Sciences.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Linguistic Diversity in Space and Time",
"authors": [
{
"first": "Johanna",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johanna Nichols. 1992. Linguistic Diversity in Space and Time. University of Chicago Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The spread of language around the Pacific rim",
"authors": [
{
"first": "Johanna",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 1994,
"venue": "Evolutionary Anthropology: Issues, News, and Reviews",
"volume": "3",
"issue": "6",
"pages": "206--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johanna Nichols. 1994. The spread of language around the Pacific rim. Evolutionary Anthropology: Issues, News, and Reviews, 3(6):206-215.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Diachronically stable structural features",
"authors": [
{
"first": "Johanna",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 1993,
"venue": "Historical Linguistics 1993. Selected Papers from the 11th International Conference on Historical Linguistics, Los Angeles 16-20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johanna Nichols. 1995. Diachronically stable struc- tural features. In Henning Andersen, editor, Histor- ical Linguistics 1993. Selected Papers from the 11th International Conference on Historical Linguistics, Los Angeles 16-20 August 1993. John Benjamins Publishing Company.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Which parts of language are the most stable?",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Parkvall",
"suffix": ""
}
],
"year": 2008,
"venue": "STUF-Language Typology and Universals Sprachtypologie und Universalienforschung",
"volume": "61",
"issue": "",
"pages": "234--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Parkvall. 2008. Which parts of language are the most stable? STUF-Language Typology and Uni- versals Sprachtypologie und Universalienforschung, 61(3):234-250.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Nonparametric Bayesian multiple imputation for incomplete categorical variables in large-scale assessment surveys",
"authors": [
{
"first": "Yajuan",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "Jerome",
"middle": [
"P"
],
"last": "Reiter",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Educational and Behavioral Statistics",
"volume": "38",
"issue": "5",
"pages": "499--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yajuan Si and Jerome P. Reiter. 2013. Nonparametric Bayesian multiple imputation for incomplete cate- gorical variables in large-scale assessment surveys. Journal of Educational and Behavioral Statistics, 38(5):499-521.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Maximum-margin matrix factorization",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D M"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 17th International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1329--1336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Srebro, Jason D. M. Rennie, and Tommi S. Jaakkola. 2005. Maximum-margin matrix factoriza- tion. In Proceedings of the 17th International Con- ference on Neural Information Processing Systems, pages 1329-1336.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Discriminative analysis of linguistic features for typological study",
"authors": [
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Kawasaki",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroya Takamura, Ryo Nagata, and Yoshifumi Kawasaki. 2016. Discriminative analysis of linguis- tic features for typological study. In Proceedings of the Tenth International Conference on Language Re- sources and Evaluation (LREC 2016), pages 69-76.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Cultural macroevolution on neighbor graphs: Vertical and horizontal transmission among western north American Indian societies",
"authors": [
{
"first": "Mary",
"middle": [
"C"
],
"last": "Towner",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"N"
],
"last": "Grote",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Venti",
"suffix": ""
},
{
"first": "Monique",
"middle": [
"Borgerhoff"
],
"last": "Mulder",
"suffix": ""
}
],
"year": 2012,
"venue": "Human Nature",
"volume": "23",
"issue": "3",
"pages": "283--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary C. Towner, Mark N. Grote, Jay Venti, and Monique Borgerhoff Mulder. 2012. Cultural macroevolution on neighbor graphs: Vertical and horizontal transmission among western north Amer- ican Indian societies. Human Nature, 23(3):283- 305.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Proposition 16",
"authors": [
{
"first": "Nikolai",
"middle": [],
"last": "Sergeevich",
"suffix": ""
},
{
"first": "Trubetzkoy",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1928,
"venue": "Acts of the First International Congress of Linguists",
"volume": "",
"issue": "",
"pages": "17--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolai Sergeevich Trubetzkoy. 1928. Proposition 16. In Acts of the First International Congress of Lin- guists, pages 17-18.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Temporal Stability of Linguistic Typological Features",
"authors": [
{
"first": "S\u00f8ren",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"W"
],
"last": "Holman",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f8ren Wichmann and Eric W. Holman. 2009. Temporal Stability of Linguistic Typological Features. Lincom Europa.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Contrasting vertical and horizontal transmission of typological features",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamauchi",
"suffix": ""
},
{
"first": "Yugo",
"middle": [],
"last": "Murawaki",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "836--846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamauchi and Yugo Murawaki. 2016. Contrast- ing vertical and horizontal transmission of typolog- ical features. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 836-846.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Overview of the proposed Bayesian generative model. Dotted boxes indicate the latent and surface representations of a language. Solid arrows show the direction of stochastic generation."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Neighbor graphs and counting functions used to encode inter-language dependencies."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Scatter plots of surface features and induced parameters, with vertical stability v i (v k ) as the y-axis and horizontal diffusibility h i (h k ) as the x-axis. Larger v i (h i ) indicates that feature i is more stable (diffusible). Comparing the absolute values of a v i and an h i makes no sense because they are tied with different neighbor graphs. Features are classified into 9 broad categories (called Area in WALS). v k (and h k ) is the geometric mean of the 100 samples. The induction models are SYNDIA (Top) and SYN (Bottom). For both models, K = 50."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "A comparison of a surface feature and a latent parameter in terms of geographical distribution. Each point denotes a language. (Top) Feature 97A, \"Relationship between the Order of Object and Verb and the Order of Adjective and Noun.\" Missing values are denoted as N/A. (Bottom) A parameter of SYNDIA with K 0 = 50. Lighter nodes indicate higher frequencies of z l,k = 1 among 100 samples."
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Notations."
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": ""
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">: Accuracy of missing value imputation.</td></tr><tr><td colspan=\"2\">The first column indicates the types of dependen-</td></tr><tr><td colspan=\"2\">cies the models exploit: inter-language dependen-</td></tr><tr><td colspan=\"2\">cies, inter-feature dependencies and both.</td></tr><tr><td>Model</td><td>Accuracy</td></tr><tr><td>Full model (SYNDIA)</td><td>74.46%</td></tr><tr><td>-vertical</td><td>73.89%</td></tr><tr><td>-horizontal</td><td>74.47%</td></tr></table>",
"html": null,
"text": ""
}
}
}
}