{
"paper_id": "W12-0209",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:15:22.275189Z"
},
"title": "Language comparison through sparse multilingual word alignment",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mayer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Unit Quantitative Language Comparison LMU Munich",
"location": {}
},
"email": "thommy.mayer@googlemail.com"
},
{
"first": "Michael",
"middle": [],
"last": "Cysouw",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Marburg",
"location": {}
},
"email": "cysouw@uni-marburg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a novel approach to compare languages on the basis of parallel texts. Instead of using word lists or abstract grammatical characteristics to infer (phylogenetic) relationships, we use multilingual alignments of words in sentences to establish measures of language similarity. To this end, we introduce a new method to quickly infer a multilingual alignment of words, using the co-occurrence of words in a massively parallel text (MPT) to simultaneously align a large number of languages. The idea is that a simultaneous multilingual alignment yields a more adequate clustering of words across different languages than the successive analysis of bilingual alignments. Since the method is computationally demanding for a larger number of languages, we reformulate the problem using sparse matrix calculations. The usefulness of the approach is tested on an MPT that has been extracted from pamphlets of the Jehova's Witnesses. Our preliminary experiments show that this approach can supplement both the historical and the typological comparison of languages.",
"pdf_parse": {
"paper_id": "W12-0209",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a novel approach to compare languages on the basis of parallel texts. Instead of using word lists or abstract grammatical characteristics to infer (phylogenetic) relationships, we use multilingual alignments of words in sentences to establish measures of language similarity. To this end, we introduce a new method to quickly infer a multilingual alignment of words, using the co-occurrence of words in a massively parallel text (MPT) to simultaneously align a large number of languages. The idea is that a simultaneous multilingual alignment yields a more adequate clustering of words across different languages than the successive analysis of bilingual alignments. Since the method is computationally demanding for a larger number of languages, we reformulate the problem using sparse matrix calculations. The usefulness of the approach is tested on an MPT that has been extracted from pamphlets of the Jehova's Witnesses. Our preliminary experiments show that this approach can supplement both the historical and the typological comparison of languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The application of quantitative methods in historical linguistics has attracted a lot of attention in recent years (cf. Steiner et al. (2011) for a survey). Many ideas have been adapted from evolutionary biology and bioinformatics, where similar problems occur with respect to the genealogical grouping of species and the multiple alignment of strings/sequences. One of the main differences between those areas and attempts to uncover language history is the limited amount of suitable data that can serve as the basis for language comparison. A widely used resource are Swadesh lists or similar collections of translational equivalents in the form of word lists. Likewise, phylogenetic methods have been applied using structural characteristics (e.g., Dunn et al. (2005) ). In this paper, we propose yet another data source, namely parallel texts.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "Steiner et al. (2011)",
"ref_id": "BIBREF16"
},
{
"start": 753,
"end": 771,
"text": "Dunn et al. (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many analogies have been drawn between the evolution of species and languages (see, for instance, Pagel (2009) for such a comparison). One of the central problems is to establish what is the equivalent of the gene in the reproduction of languages. Like in evolutionary biology, where gene sequences in organisms are compared to infer phylogenetic trees, a comparison of the \"genes\" of language would be most appropriate for a quantitative analysis of languages. Yet, Swadeshlike wordlists or structural characteristics do not neatly fit into this scheme as they are most likely not the basis on which languages are replicated. After all, language is passed on as the expression of propositions, i.e. sentences, which usually consists of more than single words. Hence, following Croft (2000) , we assume that the basic unit of replication is a linguistic structure embodied in a concrete utterance.",
"cite_spans": [
{
"start": 98,
"end": 110,
"text": "Pagel (2009)",
"ref_id": "BIBREF12"
},
{
"start": 778,
"end": 790,
"text": "Croft (2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to this view, strings of DNA in biological evolution correspond to utterances in language evolution. Accordingly, genes (i.e., the functional elements of a string of DNA) correspond to linguistic structures occurring in those utterances. Linguistic replicators (the \"genes\" of language) are thus structures in the context of an utterance. Such replicators are not only the words as parts of the sentence but also constructions to express a complex semantic structure, or phonetic realizations of a phoneme, to give just a few examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we want to propose an approach that we consider to be a first step in the direction of using the structure of utterances as the basic unit for the comparison of languages. For this purpose, a multilingual alignment of words in parallel sentences (as the equivalent of utterances in parallel texts) is computed, similar to multispecies alignments of DNA sequences. 1 These alignments are clusters of words from different languages in the parallel translations of the same sentence. 2 The remainder of the paper is organized as follows. First, we quickly review the position of our approach in relation to the large body of work on parallel text analysis (Section 2). Then we describe the method for the multilingual alignment of words (Section 3). Since the number of languages and sentences that have to be analyzed require a lot of computationally expensive calculations of co-occurrence counts, the whole analysis is reformulated into manipulations of sparse matrices. The various steps are presented in detail to give a better overview of the calculations that are needed to infer the similarities. Subsequently, we give a short description of the material that we used in order to test our method (Section 4). In Section 5 we report on some of the experiments that we carried out, followed by a discussion of the results and their implications. Finally, we conclude with directions for future work in this area.",
"cite_spans": [
{
"start": 496,
"end": 497,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alignment of words using parallel texts has been widely applied in the field of statistical machine translation (cf. Koehn (2010) ). Alignment methods have largely been employed for bitexts, i.e., parallel texts of two languages (Tiedemann, 2011) . In a multilingual context, the same methods could in principle be used for each pair of languages in the sample. One of the goals of this pa- 1 The choice of translational equivalents in the form of sentences rather than words accounts for the fact that some words cannot be translated accurately between some languages whereas most sentences can.",
"cite_spans": [
{
"start": 117,
"end": 129,
"text": "Koehn (2010)",
"ref_id": "BIBREF10"
},
{
"start": 229,
"end": 246,
"text": "(Tiedemann, 2011)",
"ref_id": "BIBREF17"
},
{
"start": 391,
"end": 392,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2"
},
{
"text": "2 In practice, we simply use wordforms as separated by spaces or punctuation instead of any more linguistically sensible notion of 'word'. For better performance, more detailed language-specific analysis is necessary, like morpheme separation, or the recognition of multi-word expressions and phrase structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2"
},
{
"text": "per, however, is to investigate what can be gained when including additional languages in the alignment process at the same time and not iteratively looking for correspondences in pairs of languages (see Simard (1999) , Simard (2000) for a similar approach).",
"cite_spans": [
{
"start": 204,
"end": 217,
"text": "Simard (1999)",
"ref_id": "BIBREF14"
},
{
"start": 220,
"end": 233,
"text": "Simard (2000)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2"
},
{
"text": "There are basically two approaches to computing word alignments as discussed in the literature (cf. Och and Ney (2003) ): (i) statistical alignment models and (ii) heuristic models. The former have traditionally been used for the training of parameters in statistical machine translation and are characterized by their high complexity, which makes them difficult to implement and tune. The latter are considerably simpler and thus easier to implement as they only require a function for the association of words, which is computed from their co-occurrence counts. A wide variety of cooccurrence measures have been employed in the literature. We decided to use a heuristic method for the first steps reported on here, but plan to integrate statistical alignment models for future work.",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "(cf. Och and Ney (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2"
},
{
"text": "Using a global co-occurrence measure, we pursue an approach in which the words are compared for each sentence individually, but for all languages at the same time. That is, a co-occurrence matrix is created for each sentence, containing all the words of all languages that occur in the corresponding translational equivalents for that sentence. This matrix then serves as the input for a partitioning algorithm whose results are interpreted as a partial alignment of the sentence. In most cases, the resulting alignments do not include words from all languages. Only those words that are close translational equivalents occur in alignments. This behavior, while not optimal for machine translation, is highly useful for language comparison because differences between languages are implicitly marked as such by splitting different structures into separate alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2"
},
{
"text": "The languages are then compared on the basis of having words in the same clusters with other languages. The more word forms they share in the same clusters, the more similar the languages are considered to be. 3 The form of the words themselves is thereby of no importance. What counts is their frequency of co-occurrence in alignments across languages. This is in stark contrast to methods which focus on the form of words with similar meanings (e.g., using Swadesh lists) in order to compute some kind of language similarity. One major disadvantage of the present approach for a comparison of languages from a historical perspective is the fact that such similarities also could be a consequence of language contact. This is a side effect that is shared by the word list approach, in which loanwords have a similar effect on the results. It has to be seen how strongly this influences the final results in order to assess whether our current approach is useful for the quantitative analysis of genealogical relatedness.",
"cite_spans": [
{
"start": 210,
"end": 211,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2"
},
{
"text": "We start from a massively parallel text, which we consider as an n\u00d7m matrix consisting of n different parallel sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "S = {S 1 , S 2 , S 3 , ..., S n } in m different languages L = {L 1 , L 2 , L 3 , ..., L m }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "This data-matrix is called SL ('sentences \u00d7 languages'). We assume here that the parallel sentences are short enough so that most words occur only once per sentence. Because of this assumption we can ignore the problem of decoding the correct alignment of multiple occurring words, a problem we leave to be tackled in future research. We also ignore the complications of languagespecific chunking and simply take spaces and punctuation marks to provide a word-based separation of the sentences into parts. In future research we are planning to include the (languagespecific) recognition of bound morphemes, multiword expressions and phrase structures to allow for more precise cross-language alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Based on these assumptions, we decompose the SL matrix into two sparse matrices WS ('words \u00d7 sentences') and WL ('words \u00d7 languages') based on all words w that occur across all languages in the parallel texts. We define them as follows. First, WS ij = 1 when word w i occurs in sentence S j , and is 0 elsewhere. Second, WL ij = 1 when word w i is a word of language L j , and is 0 elsewhere. The product WS T \u2022 WL then results in a matrix of the same size as SL, listing in each cell the number of different words in each sentence. Instead of the current approach of using WS only for marking the occurrence of a word in a sentence (i.e., a 'bag of words' ap-proach), it is also possible to include the order of words in the sentences by defining WS ij = k when word w i occurs in position k in sentence S j . We will not use this extension in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
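The WS/WL decomposition can be sketched as follows. This is a minimal illustration on a two-sentence, two-language toy corpus; the sentences, variable names, and the use of numpy/scipy are our own assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy MPT: two parallel sentences in two languages (invented data).
sentences = [
    {"en": "who will rule", "de": "wer wird regieren"},
    {"en": "who is he", "de": "wer ist er"},
]

# Global word inventory: words are typed per language, so English
# 'who' and German 'wer' get distinct rows.
vocab = {}  # (language, wordform) -> row index
for sent in sentences:
    for lang, text in sent.items():
        for w in text.split():
            vocab.setdefault((lang, w), len(vocab))

langs = sorted({lang for sent in sentences for lang in sent})
ws = np.zeros((len(vocab), len(sentences)))   # WS: words x sentences
wl = np.zeros((len(vocab), len(langs)))       # WL: words x languages
for j, sent in enumerate(sentences):
    for lang, text in sent.items():
        for w in text.split():
            i = vocab[(lang, w)]
            ws[i, j] = 1                      # word i occurs in sentence j
            wl[i, langs.index(lang)] = 1      # word i belongs to that language

WS, WL = csr_matrix(ws), csr_matrix(wl)

# WS^T . WL has the shape of SL and lists the number of different
# words per sentence and language, as stated in the text.
counts = (WS.T @ WL).toarray()
```

With three-word sentences throughout, every cell of `counts` is 3, matching the claim that the product reproduces an SL-shaped matrix of word counts.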
{
"text": "The matrix WS will be used to compute cooccurrence statistics of all pairs of words, both within and across languages. Basically, we define O ('observed co-occurrences') and E ('expected co-occurrences') as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "O = WS \u2022 WS T E = WS \u2022 1 SS n \u2022 WS T E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "ij thereby gives the expected number of sentences where w i and w j occur in the corresponding translational equivalents, on the assumption that words from different languages are statistically independent of each other and occur at random in the translational equivalents. Note that the symbol '1 ab ' in our matrix multiplications refers to a matrix of size a \u00d7 b consisting of only 1's. Widespread co-occurrence measures are pointwise mutual information, which under these definitions simply is log E \u2212 log O, or the cosine similarity, which would be O \u221a n\u2022E . However, we assume that the co-occurrence of words follow a poisson process (Quasthoff and Wolff, 2002) , which leads us to define the co-occurrence matrix WW ('words \u00d7 words') using a poisson distribution as:",
"cite_spans": [
{
"start": 640,
"end": 667,
"text": "(Quasthoff and Wolff, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
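The O and E matrices can be computed directly from a toy WS matrix (invented occurrence values, not the paper's data). Since 1_SS is an all-ones matrix, E_ij reduces to f_i * f_j / n, where f_i is the number of sentences containing word i.

```python
import numpy as np

# Toy WS matrix (3 words x 4 sentences, invented occurrences).
WS = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
n = WS.shape[1]

# Observed co-occurrences: number of sentences containing both words.
O = WS @ WS.T

# Expected co-occurrences under independence:
# E = WS . (1_SS / n) . WS^T, with 1_SS an n x n all-ones matrix.
E = WS @ (np.ones((n, n)) / n) @ WS.T

# Equivalently E_ij = f_i * f_j / n, with f the per-word sentence counts.
f = WS.sum(axis=1)
assert np.allclose(E, np.outer(f, f) / n)
```

Here words 0 and 1 co-occur in two sentences (O = 2) against an expectation of 3 * 2 / 4 = 1.5.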
{
"text": "WW = \u2212 log[ E O exp(\u2212E) O! ] = E + log O! \u2212 O log E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "This WW matrix represents a similarity matrix of words based on their co-occurrence in translational equivalents for the respective language pair. Using the alignment clustering that is based on the WW matrices for each sentence, we then decompose the words-by-sentences matrix WS into two sparse matrices WA ('words \u00d7 alignments') and AS ('alignments \u00d7 sentences') such that WS = WA \u2022 AS. This decomposition is the basic innovation of the current paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
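The Poisson-based measure can be implemented with a log-gamma function for the factorial term. This is a sketch under the definitions above; the function name is ours, not the authors' code.

```python
import numpy as np
from scipy.special import gammaln

def poisson_cooccurrence(O, E):
    """-log of the Poisson probability E^O * exp(-E) / O!, which
    expands to E + log(O!) - O*log(E). gammaln(O + 1) gives log(O!)
    in a numerically stable way, also for large counts."""
    O = np.asarray(O, dtype=float)
    E = np.asarray(E, dtype=float)
    return E + gammaln(O + 1) - O * np.log(E)

# A pair co-occurring far more often than expected gets a larger
# ('more surprising') value than one near its expected rate.
surprising = poisson_cooccurrence(8, 1.5)
expected_rate = poisson_cooccurrence(2, 1.5)
```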
{
"text": "The idea is to compute concrete alignments from the statistical alignments in WW for each sentence separately, but for all languages at the same time. For each sentence S i we take the subset of the similarity matrix WW only including those words that occur in the column WS i , i.e., only those words that occur in sentence S i . We then perform a partitioning on this subset of the similarity matrix WW. In this paper we use the affinity propagation clustering approach from Frey and Dueck (2007) to identify the clusters, but this is mainly a practical choice and other methods could be used here as well. The reason for this choice is that this clustering does not require a pre-defined number of clusters, but establishes the optimal number of clusters together with the clustering itself. 4 In addition, it yields an exemplar for each cluster, which is the most typical member of the cluster. This enables an inspection of intermediate results of what the clusters actually contain. The resulting clustering for each sentence identifies groups of words that are similar to each other, which represent words that are to be aligned across languages. Note that we do not force such clusters to include words from all languages, nor do we force any restrictions on the number of words per language in each cluster. 5 In practice, most alignments only include words from a small number of the languages included.",
"cite_spans": [
{
"start": 477,
"end": 498,
"text": "Frey and Dueck (2007)",
"ref_id": "BIBREF5"
},
{
"start": 795,
"end": 796,
"text": "4",
"ref_id": null
},
{
"start": 1317,
"end": 1318,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
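A minimal sketch of the per-sentence partitioning step, using scikit-learn's AffinityPropagation on a precomputed similarity matrix. The four-word toy similarities are invented stand-ins for one sentence's WW subset, not values from the paper.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy similarity matrix for the words of one sentence in two languages
# (rows/cols: who/en, wer/de, jesus/en, jesus/de); invented values.
words = ["who/en", "wer/de", "jesus/en", "jesus/de"]
sim = np.array([
    [5.0, 4.0, 0.5, 0.4],
    [4.0, 5.0, 0.6, 0.5],
    [0.5, 0.6, 5.0, 4.5],
    [0.4, 0.5, 4.5, 5.0],
])

# Affinity propagation works on a precomputed similarity matrix,
# determines the number of clusters itself, and returns an exemplar
# (most typical member) for each cluster.
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
labels = ap.labels_
exemplars = [words[i] for i in ap.cluster_centers_indices_]
```

With this block structure the two translation pairs should fall into separate clusters, each with its own exemplar, mirroring the alignment clusters described above.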
{
"text": "To give a concrete example for the clustering results, consider the English sentence given below (no. 93 in our corpus, see next section) together with its translational equivalents in German, Bulgarian, Spanish, Maltese and Ewe (without punctuation and capitalization).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "i. who will rule with jesus (English, en) ii. wer wird mit jesus regieren (German, de)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "iii. ko$ i we upravlva s isus (Bulgarian, bl) iv. qui\u00e9nes gobernar\u00e1n con jes\u00fas (Spanish, es) v. min se jahkem ma\u0121es\u00f9 (Maltese, mt) vi. amekawoe a\u00e3u fia kple yesu (Ewe, ew)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "These six languages are only a subset of the 50 languages that served as input for the matrix WW where all words that occur in the respective sentence for all 50 languages are listed together with their co-occurrence significance. When restricting the output of the clustering to those words that occur in the six languages given above, 4 Instead of a prespecified number of clusters, affinity propagation in fact takes a real number as input for each data point where data points with larger values are more likely to be chosen as exemplars. If no input preference is given for each data point, as we did in our experiments, exemplar preferences are initialized as the median of non infinity values in the input matrix.",
"cite_spans": [
{
"start": 337,
"end": 338,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "5 Again, this takes into account that some words cannot be translated accurately between some languages. however, the following clustering result is obtained:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "1. isus bl jesus en fia ew yesu ew\u0121 es\u00f9 mt jes\u00fas es jesus de 2. ko$ i bl who en min mt wer de 3. regieren de 4. upravlva bl a\u00e3u ew jahkem mt gobernar\u00e1n es 5. amekawoe ew qui\u00e9nes es 6. we bl will en se mt wird de 7. s bl with en con es mit de 8. kple ew 9. ma mt 10. rule en First note that the algorithm does not require all languages to be given in the same script. Bulgarian isus is grouped together with its translational equivalents in cluster 1 even though it does not share any grapheme with them. Rather, words from different languages end up in the same cluster if they behave similarly across languages in terms of their co-occurrence frequency. Further, note that the \"question word\" clusters 2 and 5 differ in their behavior as will be discussed in more detail in Section 5.2. Also note that the English \"rule\" and German \"regieren\" are not included in the cluster 4 with similar translations in the other languages. This turns out to be a side effect of the very low frequency of these words in the current corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In the following, we will refer to these clusters of words as alignments (many-to-many mappings between words) within the same sentence across languages. For instance, sentences i., iii. and v. above would have the following alignment, where indices mark those words that are aligned by the alignment clusters (1.-10.) above: who 2 will 6 rule 10 with 7 jesus 1 min 2 se 6 jahkem 4 ma 7\u0121 es\u00f9 1 ko$ i 2 we 6 upravlva 4 s 7 isus 1 All alignment-clusters from all sentences are summarized as columns in the sparse matrix WA, defined as WA ij = 1 when word w i is part of alignment A j , and is 0 elsewhere. 6 We also establish the 'book-keeping' matrix AS to keep track of which alignment belongs to which sentence, defined as AS ij = 1 when the alignment A i occurs in sentence S j , and as 0 elsewhere. The alignment matrix WA is the basic information to be used for language comparison. For example, the product WA \u2022 WA T represents a sparse version of the words \u00d7 words similarity matrix WW.",
"cite_spans": [
{
"start": 604,
"end": 605,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "A more interesting usage of WA is to derive a similarity between the alignments AA. We define both a sparse version of AA, based on the number of words that co-occur in a pair of alignments, and a statistical version of AA, based on the average similarity between the words in the two alignments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "AA sparse = WA T \u2022 WA AA statistical = WA T \u2022 WW \u2022 WA WA T \u2022 1 WW \u2022 WA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The AA matrices will be used to select suitable alignments from the parallel texts to be used for language comparison. Basically, the statistical AA will be used to identify similar alignments within a single sentence and the sparse AA will be used to identify similar alignments across different sentences. Using a suitable selection of alignments (we here use the notation A for a selection of alignments 7 ), a similarity between languages LL can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
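Both AA variants follow directly from the definitions above. The WA and WW matrices below are small invented examples; note how the denominator WA^T · 1_WW · WA counts the word pairs, so AA_statistical is an average WW similarity.

```python
import numpy as np

# Invented example: 5 words, 3 alignments (WA) and a symmetric toy
# word-by-word similarity matrix WW.
WA = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)
WW = np.arange(25, dtype=float).reshape(5, 5)
WW = (WW + WW.T) / 2  # symmetrize

# Sparse variant: number of words shared by each pair of alignments.
AA_sparse = WA.T @ WA

# Statistical variant: average WW similarity between the words of two
# alignments; the denominator WA^T . 1_WW . WA counts the word pairs.
AA_statistical = (WA.T @ WW @ WA) / (WA.T @ np.ones_like(WW) @ WA)
```

Alignments 1 and 2 share one word (row 3), so `AA_sparse[1, 2]` is 1, while the diagonal of `AA_sparse` holds each alignment's size.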
{
"text": "LL = LA \u2022 LA T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "by defining LA ('languages \u00d7 alignments') as the number of words per language that occur in each selected alignment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "LA = WL T \u2022 WA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The similarity between two languages LL is then basically defined as the number of times words are attested in the selected alignments for both languages. It thus gives an overview of how structurally similar two languages are, where languages are considered to have a more similar structure the more words they share in the alignment clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
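The LA and LL computation can be sketched on toy indicator matrices (invented values, not the paper's data):

```python
import numpy as np

# Invented indicator matrices: 6 words, 2 languages, 3 selected alignments.
WL = np.array([
    [1, 0], [1, 0], [1, 0],   # words of language 0
    [0, 1], [0, 1], [0, 1],   # words of language 1
], dtype=float)
WA = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],                # this word occurs in no selected alignment
], dtype=float)

# LA: words per language in each selected alignment.
# LL: how often two languages are attested together in those alignments.
LA = WL.T @ WA
LL = LA @ LA.T
```

Language 0 contributes a word to all three alignments and language 1 to two of them, so the two languages share two alignments (`LL[0, 1]`).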
{
"text": "Parallel corpora have received a lot of attention since the advent of statistical machine translation (Brown et al., 1988) where they serve as training material for the underlying alignment models. For this reason, the last two decades have seen an increasing interest in the collection of parallel corpora for a number of language pairs (Hansard 8 ), also including text corpora which contain texts in three or more languages (OPUS 9 , Europarl 10 , Multext-East 11 ). Yet there are only few resources which comprise texts for which translations are available into many different languages. Such texts are here referred to as 'massively parallel texts' (MPT; cf. Cysouw and W\u00e4lchli (2007) ). The most well-known MPT is the Bible, which has a long tradition in being used as the basis for language comparison. Apart from that, other religious texts are also available online and can be used as MPTs. One of them is a collection of pamphlets of the Jehova's Witnesses, some of which are available for over 250 languages.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Brown et al., 1988)",
"ref_id": null
},
{
"start": 664,
"end": 689,
"text": "Cysouw and W\u00e4lchli (2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "In order to test our methods on a variety of languages, we collected a number of pamphlets from the Watchtower website http://www. watchtower.org) together with their translational equivalents for 146 languages in total. The texts needed some preprocessing to remove HTML markup, and they were aligned with respect to the paragraphs according to the HTML markup. We extracted all paragraphs which consisted of only one sentence in the English version and contained exactly one English question word (how, who, where, what, why, whom, whose, when, which) and a question mark at the end. From these we manually excluded all sentences where the \"question word\" is used with a different function (e.g., where who is a relative pronoun rather than a question word). In the end we were left with 252 questions in the English version and the corresponding sentences in the 145 other languages. Note that an English interrogative sentence is not necessarily translated as a question in each other language (e.g., the English question what is the truth about God? is simply translated into German as die Wahrheit\u00fcber Gott 'the truth about God'). However, such translations appear to be exceptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
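The question-selection step described above can be sketched as a simple filter. The helper name and regex are our own illustrative assumptions; the manual exclusion of relative-pronoun uses would still follow this automatic step.

```python
import re

# Question words listed in the text; helper name and regex are ours.
QUESTION_WORDS = r"\b(how|who|where|what|why|whom|whose|when|which)\b"
pattern = re.compile(QUESTION_WORDS, re.IGNORECASE)

def is_candidate_question(paragraph: str) -> bool:
    """Keep one-sentence paragraphs that end in '?' and contain exactly
    one English question word; relative-pronoun uses of e.g. 'who'
    still have to be excluded manually afterwards."""
    if not paragraph.strip().endswith("?"):
        return False
    return len(pattern.findall(paragraph)) == 1

kept = is_candidate_question("who will rule with jesus?")
```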
{
"text": "As a first step to show that our method yields promising results we ran the method for the 27 Indo-European languages in our sample in order to see what kind of global language similarity arises when using the present approach. In our procedure, each sentence is separated into various multilingual alignments. Because the structures of languages are different, not each alignment will span across all languages. Most alignments will be 'sparse', i.e., they will only include words from a subset of all languages included. In total, we obtained 6, 660 alignments (i.e., 26.4 alignments per sentence on average), with each alignment including on average 9.36 words. The number of alignments per sentence turns out to be linearly related to the average number of words per sentence, as shown in Fig. 1 . A linear interpolation results in a slope of 2.85, i.e., there are about three times as many alignments per sentence as the average number of words. We expect that this slope depends on the number of languages that are included in the analysis: the more languages, the steeper the slope. We use the LL matrix as the similarity matrix for languages including all 6, 660 alignments. For each language pair this matrix contains the number of times words from both languages are attested in the same alignment. This similarity matrix is converted into a distance matrix by subtracting the similarity value from the highest value that occurs in the matrix:",
"cite_spans": [],
"ref_spans": [
{
"start": 793,
"end": 799,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Global comparison of Indo-European",
"sec_num": "5.1"
},
{
"text": "LL dist = max(LL) \u2212 LL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global comparison of Indo-European",
"sec_num": "5.1"
},
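The similarity-to-distance conversion is a one-liner; the LL values below are invented:

```python
import numpy as np

# Invented language-by-language similarity counts.
LL = np.array([
    [9.0, 7.0, 2.0],
    [7.0, 9.0, 3.0],
    [2.0, 3.0, 9.0],
])

# Distances for the NeighborNet: subtract each similarity from the
# global maximum, so the most similar pair gets the smallest distance.
LL_dist = LL.max() - LL
```

A NeighborNet tool such as SplitsTree would then take this distance matrix as input.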
{
"text": "This distance matrix LL dist is transformed into a NeighborNet visualization for an inspection of the structures that are latent in the distance matrix. The NeighborNet in Fig. 2 reveals an approximate grouping of languages according to the major language families, the Germanic family on the right, the Romance family on the top and the Slavic family at the bottom. Note that the sole Celtic language in our sample, Welsh, is included inside the Germanic languages, closest to English. This might be caused by horizontal influence from English on Welsh. Further, the only Baltic language in our sample, Lithuanian, is grouped with the Slavic languages (which is phylogenetically expected behavior in line with Gray and Atkinson (2003) ), though note that it is grouped particularly close to Russian and Polish, which suggests more recent horizontal transfer. Interestingly, the separate languages Albanian and Greek roughly group together with two languages from the other families: Romanian (Romance) and Bulgarian (Slavic). This result is not in line with their phylogenetic relatedness but rather reflects a contact situation in which all four languages are part of the Balkan Sprachbund.",
"cite_spans": [
{
"start": 711,
"end": 735,
"text": "Gray and Atkinson (2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 172,
"end": 178,
"text": "Fig. 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Global comparison of Indo-European",
"sec_num": "5.1"
},
{
"text": "Although the NeighborNet visualization exhibits certain outcomes that do not correspond to the attested genealogical relationship of the languages, the method still fares pretty well based on a visual inspection of the resulting Neighbor-Net. In the divergent cases, the groupings can be explained by the fact that the languages are influenced by the surrounding languages (as is most clear for the Balkan languages) through direct language contact. As mentioned before, a similar problem also exists when using word lists to infer phylogenetic trees when loanwords introduce noise into the calculations and thus lead to a closer relationship of languages than is genealogically tenable. However, in the case of our alignments the influence of language contact is not related to loanwords but to the borrowing of similar constructions or structural features. In the Balkan case, linguists have noted over one hundred such shared structural features, among them the loss of the infinitive, syncretism of dative and genitive case and postposed articles (cf. Joseph (1992) and references therein). These features are particularly prone to lead to a higher similarity in our approach where the alignment of words within sentences is sensitive to the fact that certain word forms are identical or different even though the exact form of the word is not relevant.",
"cite_spans": [
{
"start": 1051,
"end": 1069,
"text": "(cf. Joseph (1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global comparison of Indo-European",
"sec_num": "5.1"
},
{
"text": "A second experiment we conducted involved a closer study of just a few questions in the data at hand to obtain a better impression of the results of the alignment procedure. For this experiment, we took the same 252 questions for a worldwide sample of 50 languages. After running the whole procedure, we selected just the six sentences in the sample that were formulated in English with a who interrogative, i.e., questions as to the person who did something. We expected to be able to find all translations of English who in the alignments. Interestingly, this is not what happened. The six alignments that comprised the English who only included words in 23 to 30 other languages in the sample, so we are clearly not finding all translations of who. By using a clustering on AA statistical we were able to find seven more alignments that appear to be highly similar to the six alignments including English who. Together, these 13 alignments included words for almost all languages in the six sentences (on average 47.7 words for each sentence). We computed a language similarity LL only on the basis of these 13 alignments, which represents a typology of the structure of PERSON interrogatives. This typology clearly separates into two clusters of languages, two 'types' so to speak, as can be seen in Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1304,
"end": 1310,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Typology of PERSON interrogatives",
"sec_num": "5.2"
},
{
"text": "Investigating the reason for these two types, it turns out that the languages in the right cluster of Fig. 3 consistently separate the six sentences into two groups. The first, second, and fourth sentence are differently marked than the third, fifth and sixth sentence. For example, Finnish uses ketk\u00e4 vs. kuka and Spanish qui\u00e9nes vs. qui\u00e9n. These are both oppositions in number, suggesting that all languages in the right cluster of Fig. 3 distinguish between a singular and a plural form of who. Interpreting the meaning of the English sentences quoted above, this distinction makes complete sense. The Ewe form amekawoe in example vi. (see Section 3) contains the plural marker -wo, which distinguishes it from the singular form and indeed correctly clusters together with qui\u00e9nes in the alignment cluster 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 108,
"text": "Fig. 3",
"ref_id": null
},
{
"start": 434,
"end": 440,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Typology of PERSON interrogatives",
"sec_num": "5.2"
},
{
"text": "This example shows that it is possible to use parallel texts to derive a typology of languages for a highly specific characteristic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typology of PERSON interrogatives",
"sec_num": "5.2"
},
{
"text": "One major problem with using our approach for phylogentic reconstruction is the influence of language contact. Traits of the languages which are not inherited from a common proto-language but are transmitted through contact situations lead to noise in the similarity matrix which does not reflect a genealogical signal. However, other methods also suffer from the shortcoming that language contact cannot be automatically subtracted from the comparison of languages without manual input (such as manually created cognate lists). With translational equivalents, a further problem for the present approach is the influence of translationese on the results. If one version in a language is a direct translation of another language, the structural similarity might get a higher score due to the fact that constructions will be literally translated which otherwise would be expressed differently in that language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "The experiments that have been presented in this paper are only a first step. However, we firmly believe that a multilingual alignment of words is more appropriate for a large-scale comparison of languages than an iterative bilingual alignment. Yet so far we do not have the appropriate evaluation method to prove this. We therefore plan to include a validation scheme in order to test how much can be gained from the simultaneous analysis of more than two languages. Apart from this, we intend to improve the alignment method itself by integrating techniques from statistical alignment models, like adding morpheme separation or phrase structures into the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Another central problem for the further development of this method is the selection of alignments for the language comparison. As our second experiment showed, just starting from a selection of English words will not automatically generate the corresponding words in the other languages. It is possible to use the AA matrices to search for further similar alignments, but this procedure is not yet formalized enough to automatically produce language classification for selected linguistic domains (like for the PERSON interrogatives in our experiment). When this step is better understood, we will be able to automatically generate typological parameters for a large number of the world's languages, and thus easily produce more data on which to base future language comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "A related approach is discussed in W\u00e4lchli (2011). The biggest difference to the present approach is that W\u00e4lchli only compares languages pairwise. In addition, he makes use of a global glossing method and not an alignment of words within the same parallel sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For instance, the alignment in 2. above contains the four words {ko$ i, who, min, wer}, which are thus marked with 1 whereas all other words have 0 in this column of the WA matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the prime in this case does not stand for the transpose of a matrix, as it is sometimes used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.isi.edu/natural-language/ download/hansard/ 9 http://opus.lingfil.uu.se 10 http://www.statmt.org/europarl/ 11 http://nl.ijs.si/ME/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by the DFG project \"Algorithmic corpus-based approaches to typological comparison\". We are grateful to four anonymous reviewers for their valuable comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical approach to language translation",
"authors": [
{
"first": "Paul",
"middle": [
"S"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12th International Conference on Computational Linguistics (COLING-88)",
"volume": "",
"issue": "",
"pages": "71--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mercer, and Paul S. Roossin. 1988. A statistical approach to language translation. In Proceedings of the 12th International Conference on Computa- tional Linguistics (COLING-88), pages 71-76.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explaining Language Change: An Evolutionary Approach",
"authors": [
{
"first": "William",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Croft. 2000. Explaining Language Change: An Evolutionary Approach. Harlow: Longman.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parallel texts: using translational equivalents in linguistic typology",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Cysouw",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "W\u00e4lchli",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "60",
"issue": "",
"pages": "95--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Cysouw and Bernhard W\u00e4lchli. 2007. Paral- lel texts: using translational equivalents in linguis- tic typology. Sprachtypologie und Universalien- forschung STUF, 60(2):95-99.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Structural phylogenetics and the reconstruction of ancient language history",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Terrill",
"suffix": ""
},
{
"first": "Ger",
"middle": [],
"last": "Reesink",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Foley",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"C"
],
"last": "Levinson",
"suffix": ""
}
],
"year": 2005,
"venue": "Science",
"volume": "309",
"issue": "5743",
"pages": "2072--2077",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Dunn, Angela Terrill, Ger Reesink, R. A. Fo- ley, and Steve C. Levinson. 2005. Structural phylo- genetics and the reconstruction of ancient language history. Science, 309(5743):2072-5, 9.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Clustering by passing messages between data points",
"authors": [
{
"first": "J",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Delbert",
"middle": [],
"last": "Frey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dueck",
"suffix": ""
}
],
"year": 2007,
"venue": "Science",
"volume": "315",
"issue": "",
"pages": "972--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan J. Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science, 315:972-976.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Height Figure 3: Hierarchical cluster using Ward's minimum variance method (created with R, R Development Core Team (2010)) depicting a typology of languages according to the structure of their PERSON interrogatives",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Height Figure 3: Hierarchical cluster using Ward's minimum variance method (created with R, R Development Core Team (2010)) depicting a typology of languages according to the structure of their PERSON interrogatives",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language-tree divergence times support the Anatolian theory of Indo-European origin",
"authors": [
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2003,
"venue": "Nature",
"volume": "426",
"issue": "",
"pages": "435--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the Ana- tolian theory of Indo-European origin. Nature, 426:435-439.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Application of phylogenetic networks in evolutionary studies",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Huson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2006,
"venue": "Molecular Biology and Evolution",
"volume": "23",
"issue": "2",
"pages": "254--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel H. Huson and David Bryant. 2006. Applica- tion of phylogenetic networks in evolutionary stud- ies. Molecular Biology and Evolution, 23(2):254- 267.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Balkan languages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Brian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joseph",
"suffix": ""
}
],
"year": 1992,
"venue": "International Encyclopedia of Linguistics",
"volume": "",
"issue": "",
"pages": "153--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian D. Joseph. 1992. The Balkan languages. In William Bright, editor, International Encyclopedia of Linguistics, pages 153-155. Oxford: Oxford Uni- versity Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Human language as a culturally transmitted replicator",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Pagel",
"suffix": ""
}
],
"year": 2009,
"venue": "Nature Reviews Genetics",
"volume": "10",
"issue": "",
"pages": "405--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Pagel. 2009. Human language as a culturally transmitted replicator. Nature Reviews Genetics, 10:405-415.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "R: A language and environment for statistical computing",
"authors": [
{
"first": "Uwe",
"middle": [],
"last": "Quasthoff",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wolff",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2nd International Workshop on Computational Approaches to Collocations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uwe Quasthoff and Christian Wolff. 2002. The poisson collocation measure and its applications. In Proceedings of the 2nd International Workshop on Computational Approaches to Collocations, Vi- enna, Austria. R Development Core Team, 2010. R: A language and environment for statistical computing. Wien: R Foundation for Statistical Computing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text-translation alignment: Three languages are better than two",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP/VLC-99",
"volume": "",
"issue": "",
"pages": "2--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Simard. 1999. Text-translation alignment: Three languages are better than two. In Proceed- ings of EMNLP/VLC-99, pages 2-11.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text-translation alignment: Aligning three or more versions of a text",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
}
],
"year": 2000,
"venue": "Parallel Text Processing: Alignment and Use of Translation Corpora",
"volume": "",
"issue": "",
"pages": "49--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Simard. 2000. Text-translation alignment: Aligning three or more versions of a text. In Jean V\u00e9ronis, editor, Parallel Text Processing: Align- ment and Use of Translation Corpora, pages 49-67. Dordrecht: Kluwer Academic Publishers.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A pipeline for computational historical linguistics",
"authors": [
{
"first": "Lydia",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"F"
],
"last": "Stadler",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cysouw",
"suffix": ""
}
],
"year": 2011,
"venue": "Language Dynamics and Change",
"volume": "1",
"issue": "1",
"pages": "89--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lydia Steiner, Peter F. Stadler, and Michael Cysouw. 2011. A pipeline for computational historical linguistics. Language Dynamics and Change, 1(1):89-127.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bitext Alignment",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2011. Bitext Alignment. Morgan & Claypool Publishers.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Quantifying inner form: A study in morphosemantics",
"authors": [
{
"first": "Bernhard",
"middle": [],
"last": "W\u00e4lchli",
"suffix": ""
}
],
"year": 2011,
"venue": "Arbeitspapiere. Bern: Institut f\u00fcr Sprachwissenschaft",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernhard W\u00e4lchli. 2011. Quantifying inner form: A study in morphosemantics. Arbeitspapiere. Bern: Institut f\u00fcr Sprachwissenschaft.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Linear relation between the average number of words per sentence and number of alignments per sentence"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "NeighborNet (created with SplitsTree, Huson and Bryant (2006)) of all Indo-European languages in the sample"
}
}
}
}