{
"paper_id": "W12-0216",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:12:01.216796Z"
},
"title": "LexStat: Automatic Detection of Cognates in Multilingual Wordlists",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Institute for Romance Languages and Literature Heinrich Heine University",
"location": {
"settlement": "D\u00fcsseldorf",
"country": "Germany"
}
},
"email": "listm@phil.uni-duesseldorf.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, a new method for automatic cognate detection in multilingual wordlists will be presented. The main idea behind the method is to combine different approaches to sequence comparison in historical linguistics and evolutionary biology into a new framework which closely models the most important aspects of the comparative method. The method is implemented as a Python program and provides a convenient tool which is publicly available, easily applicable, and open for further testing and improvement. Testing the method on a large gold standard of IPAencoded wordlists showed that its results are highly consistent and outperform previous methods.",
"pdf_parse": {
"paper_id": "W12-0216",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, a new method for automatic cognate detection in multilingual wordlists will be presented. The main idea behind the method is to combine different approaches to sequence comparison in historical linguistics and evolutionary biology into a new framework which closely models the most important aspects of the comparative method. The method is implemented as a Python program and provides a convenient tool which is publicly available, easily applicable, and open for further testing and improvement. Testing the method on a large gold standard of IPAencoded wordlists showed that its results are highly consistent and outperform previous methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "During the last two decades there has been an increasing interest in automatic approaches to historical linguistics, which is reflected in the large amount of literature on phylogenetic reconstruction (e.g. Ringe et al., 2002; Gray and Atkinson, 2003; Brown et al., 2008) , statistical aspects of genetic relationship (e.g. Baxter and Manaster Ramer, 2000; Kessler, 2001; Mortarino, 2009) , and phonetic alignment (e.g. Kondrak, 2002; Proki\u0107 et al., 2009; List, forthcoming) .",
"cite_spans": [
{
"start": 207,
"end": 226,
"text": "Ringe et al., 2002;",
"ref_id": "BIBREF19"
},
{
"start": 227,
"end": 251,
"text": "Gray and Atkinson, 2003;",
"ref_id": "BIBREF9"
},
{
"start": 252,
"end": 271,
"text": "Brown et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 324,
"end": 356,
"text": "Baxter and Manaster Ramer, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 357,
"end": 371,
"text": "Kessler, 2001;",
"ref_id": "BIBREF14"
},
{
"start": 372,
"end": 388,
"text": "Mortarino, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 420,
"end": 434,
"text": "Kondrak, 2002;",
"ref_id": "BIBREF15"
},
{
"start": 435,
"end": 455,
"text": "Proki\u0107 et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 456,
"end": 474,
"text": "List, forthcoming)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While the supporters of these new automatic methods would certainly agree that their greatest advantage lies in the increase of repeatability and objectivity, it is interesting to note that the most crucial part of the analysis, namely the identification of cognates in lexicostatistical datasets, is still almost exclusively carried out manually. That this may be problematic was recently shown in a comparison of two large lexicostatistical datasets pro-duced by different scholarly teams where differences in item translation and cognate judgments led to topological differences of 30% and more (Geisler and List, forthcoming) . Unfortunately, automatic approaches to cognate detection still lack the precision of trained linguists' judgments. Furthermore, most of the methods that have been proposed so far only deal with bilingual as opposed to multilingual wordlists.",
"cite_spans": [
{
"start": 598,
"end": 629,
"text": "(Geisler and List, forthcoming)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The LexStat method, which will be presented in the following, is a convenient tool which not only closely renders the most important aspects of manual approaches but also yields transparent decisions that can be directly compared with the results achieved by the traditional methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In historical linguistics, cognacy is traditionally determined within the framework of the comparative method (Trask, 2000, 64-67) . The final goal of this method is the reconstruction of protolanguages, yet the basis of the reconstruction itself rests on the identification of cognate words or morphemes within genetically related languages. Within the comparative method, cognates in a given set of language varieties are identified by applying a recursive procedure. First an initial list of putative cognate sets is created by comparing semantically and phonetically similar words from the languages to be investigated. In most of the literature dealing with the comparative method, the question of which words are most suitable for the initial compilation of cognate lists is not explicitly addressed, yet it seems obvious that the comparanda should belong to the basic vocabulary of the languages. Based on this cognate list, an ini-tial list of putative sound correspondences (correspondence list) is created. Sound correspondences are determined by aligning the cognate words and searching for sound pairs which repeatedly occur in similar positions of the presumed cognate words. After these initial steps have been made, the cognate list and the correspondence list are modified by 1. adding and deleting cognate sets from the cognate list depending on whether or not they are consistent with the correspondence list, and 2. adding and deleting sound correspondences from the correspondence list, depending on whether or not they find support in the cognate list.",
"cite_spans": [
{
"start": 110,
"end": 130,
"text": "(Trask, 2000, 64-67)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Comparative Method",
"sec_num": "2.1"
},
{
"text": "These steps are repeated until the results seem satisfying enough such that no further modifications, neither of the cognate list, nor of the correspondence list, seem to be necessary. The specific strength of the comparative method lies in the similarity measure which is applied for the identification of cognates: Sequence similarity is determined on the basis of systematic sound correspondences (Trask, 2000, 336) as opposed to similarity based on surface resemblances of phonetic segments. Thus, comparing English token [t\u0259\u028ak\u0259n] and German Zeichen [\u02a6a\u026a\u00e7\u0259n] 'sign', the words do not really sound similar, yet their cognacy is assumed by the comparative method, since their phonetic segments can be shown to correspond regularly within other cognates of both languages. 1 Lass (1997, 130) calls this notion of similarity genotypic as opposed to a phenotypic notion of similarity, yet the most crucial aspect of correspondence-based similarity is that it is language-specific: Genotypic similarity is never defined in general terms but always with respect to the language systems which are being compared. Correspondence relations can therefore only be established for individual languages, they can never be taken as general statements. This may seem to be a weakness, yet it turns out that the genotypic similarity notion is one of the most crucial strengths of the comparative method: Not only does it allow us to dive deeper in the history of languages in cases where phonetic change has corrupted the former identity of cognates to such an extent that no sufficient surface similarity is left, it also makes it easier to distinguish borrowed from commonly inherited items, since the former usually come along with a greater degree of phenotypic similarity.",
"cite_spans": [
{
"start": 400,
"end": 418,
"text": "(Trask, 2000, 336)",
"ref_id": null
},
{
"start": 526,
"end": 534,
"text": "[t\u0259\u028ak\u0259n]",
"ref_id": null
},
{
"start": 781,
"end": 787,
"text": "(1997,",
"ref_id": null
},
{
"start": 788,
"end": 792,
"text": "130)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Comparative Method",
"sec_num": "2.1"
},
{
"text": "In contrast to the language-specific notion of similarity that serves as the basis for cognate detection within the framework of the comparative method, most automatic methods seek to determine cognacy on the basis of surface similarity by calculating the phonetic distance or similarity between phonetic sequences (words, morphemes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
{
"text": "The most popular distance measures are based on the paradigm of sequence alignment. In alignment analyses two or more sequences are arranged in a matrix in such a way that all corresponding segments appear in the same column, while empty cells of the matrix, resulting from noncorresponding segments, are filled with gap symbols (Gusfield, 1997, 216) . In order to retrieve a distance or a similarity score from such an alignment analysis, the matched residue pairs, i.e. the segments which appear in the same column of the alignment, are compared and given a specific score depending on their similarity. How the phonetic segments are scored depends on the respective scoring function which is the core of all alignment analyses. Thus, the scoring function underlying the edit distance only distinguishes identical from non-identical segments, while the scoring function used in the ALINE algorithm of Kondrak (2002) assigns individual similarity scores for the matching of phonetic segments based on phonetic features.",
"cite_spans": [
{
"start": 329,
"end": 350,
"text": "(Gusfield, 1997, 216)",
"ref_id": null
},
{
"start": 903,
"end": 917,
"text": "Kondrak (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
{
"text": "Using alignment analyses, cognacy can be determined by converting the distance or similarity scores to normalized distance scores and assuming cognacy for distances beyond a certain threshold. The normalized edit distance (NED) of two sequences A and B is usually calculated by dividing the edit distance by the length of the smallest sequence. The normalized distance score of algorithms which yield similarities (such as the ALINE algorithm) can be calculated by the formula of Downey et al. (2008) :",
"cite_spans": [
{
"start": 480,
"end": 500,
"text": "Downey et al. (2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
{
"text": "(1) 1 \u2212 2S AB S A + S B ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
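The conversion in formula (1) is straightforward to sketch in Python. The function name is mine; the similarity scores are assumed to come from whatever alignment routine is in use (self-alignment scores for each sequence, plus the score of the pairwise alignment):

```python
def normalized_distance(s_ab, s_a, s_b):
    """Normalized distance of Downey et al. (2008): 1 - 2*S_AB / (S_A + S_B),
    where s_a and s_b are the self-alignment scores of the two sequences
    and s_ab is the similarity score of their pairwise alignment."""
    return 1.0 - (2.0 * s_ab) / (s_a + s_b)
```

Identical sequences align as well with each other as with themselves, so the distance is 0; completely dissimilar pairs approach 1. For example, self-scores of 4 and 4 with a pairwise score of 3 give a distance of 0.25.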
{
"text": "where S A and S B are the similarity scores of the sequences aligned with themselves, and S AB is the similarity score of the alignment of both sequences. For the alignment given in Table 1 , the normalized edit distance is 0.6, and the ALINE distance is 0.25. A certain drawback of most of the common alignment methods is that their scoring function defines segment similarity on the basis of phenotypic criteria. The similarity of phonetic segments is determined on the basis of their phonetic features and not on the basis of the probability that their segments occur in a correspondence relation in genetically related languages. An alternative way to calculate phonetic similarity which comes closer to a genotypic notion of similarity is to compare phonetic sequences with respect to their sound classes. The concept of sound classes goes back to Dolgopolsky (1964) . The original idea was \"to divide sounds into such groups, that changes within the boundary of the groups are more probable than transitions from one group into another\" (Burlak and Starostin, 2005, 272) 2 .",
"cite_spans": [
{
"start": 853,
"end": 871,
"text": "Dolgopolsky (1964)",
"ref_id": "BIBREF5"
},
{
"start": 1043,
"end": 1076,
"text": "(Burlak and Starostin, 2005, 272)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
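The first-two-consonants criterion can be sketched with a toy class mapping. The mapping below is only a small, illustrative subset of Dolgopolsky's ten classes, chosen to cover the Tochter/daughter example; vowels and any segment without an entry simply carry no class:

```python
# Toy subset of Dolgopolsky-style sound classes (illustration only;
# the original model defines ten classes over the whole segment space).
SOUND_CLASSES = {
    "t": "T", "d": "T",            # dental stops
    "k": "K", "g": "K", "x": "K",  # velar obstruents
    "r": "R", "l": "R",            # liquids
}

def consonant_classes(segments):
    """Sound-class string of a word's consonants; vowels and unknown
    segments have no class entry and are skipped."""
    return "".join(SOUND_CLASSES[s] for s in segments if s in SOUND_CLASSES)

def dolgopolsky_cognate(word_a, word_b):
    """Judge cognacy by comparing the classes of the first two consonants."""
    return consonant_classes(word_a)[:2] == consonant_classes(word_b)[:2]

# German Tochter -> TKTR, English daughter -> TTR: TK != TT, so not cognate.
tochter = ["t", "ɔ", "x", "t", "ə", "r"]
daughter = ["d", "ɔː", "t", "ə", "r"]
```

With these two words the method compares "TK" against "TT" and correctly rejects cognacy, mirroring the worked example in the text.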
{
"text": "In his original study, Dolgopolsky proposed ten fundamental sound classes, based on an empirical analysis of sound-correspondence frequencies in a sample of 400 languages. Cognacy between two words is determined by comparing the first two consonants of both words. If the sound classes are identical, the words are judged to be cognate. Otherwise no cognacy is assumed. Thus, given the words German Tochter [t\u0254xt\u0259r] 'daughter' and English daughter [d\u0254\u02d0t\u0259r] , the sound class representation of both sequences will be TKTR and TTR, respectively. Since the first two consonants of both words do not match regarding their sound classes, the words are judged to be non-cognate. In recent studies, sound classes have also been used as an internal representation format for pairwise and multiple alignment analyses. The method for sound-class alignment (SCA, cf. List, forthcoming) combines the idea of sound classes with traditional alignment algorithms. In contrast to the original proposal by Dolgopolsky, SCA employs an extended sound-class model which also represents tones and vowels along with a refined scoring scheme that defines specific transition probabilities between sound classes. The benefits of the SCA distance compared to NED can be demonstrated by comparing the distance scores the methods yield for the comparison of the same data. Figure 1 contrasts the scores of NED with SCA distance for the alignment of 658 cognate and 658 non-cognate word pairs between English and German (see Sup. Mat. A). As can be seen from the figure, the scores for NED do not show a very sharp distinction between cognate and noncognate words. Even with a \"perfect\" threshold of 0.8 that minimizes the number of false positive and false negative decisions there are still 13% of incorrect decisions. The SCA scores, on the other hand, show a sharper distinction between scores for cognates and non-cognates. 
With a threshold of 0.5 the percentage of incorrect decisions decreases to 8%.",
"cite_spans": [
{
"start": 448,
"end": 456,
"text": "[d\u0254\u02d0t\u0259r]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1346,
"end": 1354,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
{
"text": "There are only three recent approaches known to the author which explicitly deal with the task of cognate detection in multilingual wordlists. All methods take multilingual, semantically aligned wordlists as input data. Bergsma and Kondrak (2007) first calculate the longest common subsequence ratio between all word pairs in the input data and then use an integer linear programming approach to cluster the words into cognate sets. Unfortunately, their method is only tested on a dataset containing alphabetic transcriptions; hence, no direct comparison with the method proposed in this paper is possible. Turchin et al. (2010) use the above-mentioned sound-class model and the cognate-identification criterion by Dolgopolsky (1964) to identify cognates in lexicostatistical datasets. Their method is also implemented within LexStat, and the results of a direct comparison will be reported in section 4.3. Steiner et al. (2011) propose an iterative approach which starts by clustering words into tentative cognate sets based on their alignment scores. These preliminary results are then refined by filtering words according to similar meanings, computing multiple alignments, and determining recurrent sound correspondences. The authors test their method on two large datasets. Since no gold standard for their test set is available, they only report intermediate results, and their method cannot be directly compared to the one proposed in this paper.",
"cite_spans": [
{
"start": 220,
"end": 246,
"text": "Bergsma and Kondrak (2007)",
"ref_id": "BIBREF1"
},
{
"start": 715,
"end": 733,
"text": "Dolgopolsky (1964)",
"ref_id": "BIBREF5"
},
{
"start": 907,
"end": 928,
"text": "Steiner et al. (2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approaches",
"sec_num": "2.2"
},
{
"text": "LexStat combines the most important aspects of the comparative method with recent approaches to sequence comparison in historical linguistics and evolutionary biology. The method employs automatically extracted language-specific scoring schemes for computing distance scores from pairwise alignments of the input data. These language-specific scoring schemes come close to the notion of sound correspondences in traditional historical linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStat",
"sec_num": "3"
},
{
"text": "The method is implemented as a part of the LingPy library, a Python library for automatic tasks in quantitative historical linguistics. 3 It can either be used in Python scripts or directly be called from the Python prompt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStat",
"sec_num": "3"
},
{
"text": "The input data are analyzed within a four-step approach: (1) sequence conversion, (2) scoringscheme creation, (3) distance calculation, and (4) sequence clustering. In stage (1), the input sequences are converted to sound classes and their sonority profiles are determined. In stage (2), a permutation method is used to create languagespecific scoring schemes for all language pairs. In stage (3) the pairwise distances between all word pairs, based on the language-specific scoring schemes, are computed. In stage (4), the sequences are clustered into cognate sets whose average distance is beyond a certain threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStat",
"sec_num": "3"
},
{
"text": "The method takes multilingual, semantically aligned wordlists in IPA transcription as input. The input format is a CSV-representation of the way multilingual wordlists are represented in the STARLING software package for lexicostatistical analyses. 4 Thus, the input data are specified in a simple tab-delimited text file with the names of the languages in the first row, an ID for the semantic slots (basic vocabulary items in traditional lexicostatistic terminology) in the first column, and the language entries in the columns corresponding to the language names. The language entries should be given either in plain IPA encoding. Additionally, the file can contain headwords (items) for semantic slots corresponding to the IDs. Synonyms, i.e. multiple entries in one language for a given meaning are listed in separate rows and given the same ID. The output format is the same as the input format except that each language column is accompanied by a column indicating the cognate judgments made by LexStat. Cognate judgments are displayed by assigning a cognate ID to each entry. If entries in the output file share the same cognate ID, they are judged to be cognate by the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input and Output Format",
"sec_num": "3.1"
},
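A minimal reader for the tab-delimited format described above might look as follows. This is a sketch of the format, not LexStat's own parser; the sample entries are invented placeholders in plain orthography (real input would be IPA):

```python
import csv
import io

SAMPLE = (
    "ID\tGerman\tEnglish\n"   # language names in the first row
    "1\tfrau\twoman\n"        # one row per entry, keyed by the semantic-slot ID
    "1\tweib\twife\n"         # synonyms repeat the same ID in a separate row
)

def read_wordlist(text):
    """Parse a STARLING-style tab-delimited wordlist into
    {language: [(slot_id, entry), ...]}."""
    rows = list(csv.reader(io.StringIO(text), delimiter="\t"))
    header, data = rows[0], rows[1:]
    entries = {lang: [] for lang in header[1:]}
    for row in data:
        for lang, entry in zip(header[1:], row[1:]):
            entries[lang].append((row[0], entry))
    return entries
```

Entries sharing a slot ID are compared as translations of the same concept; in the output, an extra cognate-ID column per language would encode the clustering decisions.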
{
"text": "In the stage of sequence conversion, all input sequences are converted to sound classes, and their respective sonority profiles are calculated. Lex-Stat uses the SCA sound-class model by default, yet other sound class models are also available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Conversion",
"sec_num": "3.2"
},
{
"text": "The idea of sonority profiles was developed in List (forthcoming). It accounts for the wellknown fact that certain types of sound changes are more likely to occur in specific prosodic contexts. Based on the sonority hierarchy of Geisler (1992, 30) , the sound segments of phonetic sequences are assigned to different prosodic environments, depending on their prosodic context. The current version of SCA distinguishes seven different prosodic environments. 5 The information regarding sound classes and prosodic context are combined, and each input sequence is further represented as a sequence of tuples, consisting of the sound class and the prosodic environment of the respective phonetic segment. During the calculation, only those segments which are identical regarding their sound class as well as their prosodic context are treated as identical.",
"cite_spans": [
{
"start": 229,
"end": 247,
"text": "Geisler (1992, 30)",
"ref_id": null
},
{
"start": 457,
"end": 458,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Conversion",
"sec_num": "3.2"
},
{
"text": "In order to create language specific scoring schemes, a permutation method is used (Kessler, 2001) . The method compares the attested distribution of residue pairs in phonetic alignment analyses of a given dataset to the expected distribution.",
"cite_spans": [
{
"start": 83,
"end": 98,
"text": "(Kessler, 2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
{
"text": "The attested distribution of residue pairs is derived from global and local alignment analyses of all word pairs whose distance is beyond a certain threshold. The threshold is used to reflect the fact that within the comparative method, recurrent sound correspondences are only established with respect to presumed cognate words, whereas noncognate words or borrowings are ignored. Taking only the best-scoring word pairs for the calculation of the attested frequency distribution increases the accuracy of the approach and helps to avoid false positive matches contributing to the creation of the scoring scheme. Alignment analyses are carried out with help of the SCA method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
{
"text": "While the attested distribution is derived from alignments of semantically aligned words, the expected distribution is calculated by aligning word pairs without regard to semantic criteria. This is achieved by repeatedly shuffling the wordlists and aligning them with help of the same methods which were used for the calculation of the attested distributions. In the default settings, the number of repetitions is set to 1000, yet many tests showed that even the number of 100 repetitions is sufficient to yield satisfying results that do not vary significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
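The permutation step can be sketched as follows. Here `align_pairs` is a stand-in for the SCA alignment routine and is assumed to return the matched residue pairs of two words (a hypothetical signature, not LingPy's API):

```python
import random
from collections import Counter

def expected_distribution(words_a, words_b, align_pairs, runs=100, seed=0):
    """Estimate the expected frequency of matched residue pairs by
    repeatedly shuffling one wordlist, so that words are paired without
    regard to meaning, and averaging the counts over all runs."""
    rng = random.Random(seed)
    counts = Counter()
    shuffled = list(words_b)
    for _ in range(runs):
        rng.shuffle(shuffled)
        for wa, wb in zip(words_a, shuffled):
            counts.update(align_pairs(wa, wb))
    # average over runs so the counts are comparable to one attested pass
    return {pair: n / runs for pair, n in counts.items()}
```

Comparing these expected counts with the attested counts from the semantically aligned pass is what exposes residue pairs that co-occur more often than chance would predict.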
{
"text": "Once the attested and the expected distributions for the segments of all language pairs are calculated, a language-specific score s x,y for each residue pair x and y in the dataset is created using the formula",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
{
"text": "(2) s x,y = 1 r 1 + r 2 ( r 1 log 2 ( a 2 x,y e 2 x,y ) + r 2 d x,y ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
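Formula (2) translates directly into code; the function name is mine, and the frequencies are assumed to be positive (a zero expected frequency would need smoothing before taking the logarithm):

```python
import math

def residue_score(a_xy, e_xy, d_xy, r1=1.0, r2=1.0):
    """Language-specific score of formula (2): a log-odds term over the
    squared attested (a_xy) and expected (e_xy) frequencies, blended with
    the language-independent score d_xy at the ratio r1 : r2."""
    log_odds = math.log2((a_xy ** 2) / (e_xy ** 2))
    return (r1 * log_odds + r2 * d_xy) / (r1 + r2)
```

With equal scaling factors, a pair attested exactly as often as expected contributes nothing via the log-odds term and scores half its language-independent score; pairs attested more often than expected are rewarded, pairs attested less often are penalized.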
{
"text": "where a x,y is the attested frequency of the segment pair, e x,y is the expected frequency, r 1 and r 2 are scaling factors, and d x,y is the similarity score of the original scoring function which was used to retrieve the attested and the expected distributions. Formula (2) combines different approaches from the literature on sequence comparison in historical linguistics and biology. The idea of squaring the frequencies of attested and expected frequencies was adopted from Kessler (2001, 150) , reflecting \"the general intuition among linguists that the evidence of phoneme recurrence grows faster than linearly\". Using the binary logarithm of the division of attested and expected frequencies of occurrence is common in evolutionary biology to retrieve similarity scores (\"logodds scores\") which are apt for the computation of alignment analyses (Henikoff and Henikoff, 1992) . The incorporation of the alignment scores of the original language-independent scoringscheme copes with possible problems resulting from small wordlists: If the dataset is too small to allow the identification of recurrent sound correspondences, the language-independent alignment scores prevent the method from treating generally probable and generally improbable matchings alike. The ratio of language-specific to languageindependent alignment scores is determined by the scaling factors r 1 and r 2 .",
"cite_spans": [
{
"start": 479,
"end": 498,
"text": "Kessler (2001, 150)",
"ref_id": null
},
{
"start": 853,
"end": 882,
"text": "(Henikoff and Henikoff, 1992)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
{
"text": "As an example of the computation of languagespecific scoring schemes, Table 3 shows attested and expected frequencies along with the resulting similarity scores for the matching of word-initial and word-final sound classes in the KSL testset (see Sup. Mat. B and C). The word-initial and word-final classes T = [t, d] were not for the specific representation of the phonetic segments by both their sound class and their prosodic context, the evidence would be blurred.",
"cite_spans": [
{
"start": 311,
"end": 317,
"text": "[t, d]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Scoring-Scheme Creation",
"sec_num": "3.3"
},
{
"text": "Once the language-specific scoring scheme is computed, the distances between all word pairs are calculated. Here, LexStat uses the \"end-space free variant\" (Gusfield, 1997, 228) of the traditional algorithm for pairwise sequence alignments which does not penalize gaps introduced in the beginning and the end of the sequences. This modification is useful when words contain prefixes or suffixes which might distort the calculation. The alignment analysis requires no further parameters such as gap penalties, since they have already been calculated in the previous step. The similarity scores for pairwise alignments are converted to distance scores following the approach of Downey et al. (2008) The benefits of the language-specific distance scores become obvious when comparing them with general ones. Table 4 gives some examples for non-cognate word pairs taken from the KSL testset (see Sup. Mat. B and C). While the SCA distances for these pairs are all considerably low, as it is suggested by the surface similarity of the words, the language-specific distances are all much higher, resulting from the fact that no further evidence for the matching of specific residue pairs can be found in the data.",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "(Gusfield, 1997, 228)",
"ref_id": null
},
{
"start": 676,
"end": 696,
"text": "Downey et al. (2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 805,
"end": 812,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Distance Calculation",
"sec_num": "3.4"
},
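The "end-space free" variant differs from standard global alignment only in its initialization and in where the final score is read off. A sketch with a caller-supplied scoring function (an illustration of the technique, not LingPy's implementation):

```python
def end_space_free_score(a, b, score, gap=-1):
    """Similarity score of the end-space free alignment variant:
    gaps before the first and after the last matched segment of either
    sequence are not penalized. `score(x, y)` scores a residue pair."""
    n, m = len(a), len(b)
    # leading gaps are free: the first row and column start at zero
    M = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            M[i][j] = max(M[i - 1][j - 1] + score(a[i - 1], b[j - 1]),
                          M[i - 1][j] + gap,
                          M[i][j - 1] + gap)
    # trailing gaps are free: take the best score in the last row or column
    return max(max(M[n]), max(row[m] for row in M))
```

With identity scoring, a prefixed word still aligns perfectly with its stem, which is exactly the point of the modification.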
{
"text": "In the last step of the LexStat algorithm all sequences occurring in the same semantic slot are clustered into cognate sets using a flat cluster variant of the UPGMA algorithm (Sokal and Michener, 1958) which was written by the author. In contrast to traditional UPGMA clustering, this algorithm terminates when a user-defined threshold of average pairwise distances is reached. 1.00 0.80 0.13 0.10 0.89 0.00 Clusters 1 2 3 3 1 3 Table 5 shows pairwise distances of German, English, Danish, Swedish, Dutch, and Norwegian entries for the item WOMAN taken from the GER dataset (see Sup. Mat. B) along with the resulting cluster decisions of the algorithm when setting the threshold to 0.6.",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 437,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Sequence Clustering",
"sec_num": "3.5"
},
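The flat cluster variant can be sketched as a plain average-linkage loop that stops merging once the closest pair of clusters exceeds the threshold (a sketch of the idea, not the author's implementation):

```python
def flat_upgma(dist, n, threshold):
    """Average-linkage (UPGMA-style) agglomerative clustering that stops
    merging once the closest pair of clusters has an average pairwise
    distance above `threshold`. `dist[i][j]` holds the symmetric distances
    of items 0..n-1. Returns a flat cluster id per item."""
    clusters = [[i] for i in range(n)]
    while len(clusters) > 1:
        best, pair = None, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist[a][b] for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best:
                    best, pair = d, (i, j)
        if best > threshold:
            break  # nothing close enough left to merge
        i, j = pair
        clusters[i] += clusters.pop(j)
    ids = [0] * n
    for cid, members in enumerate(clusters, 1):
        for m in members:
            ids[m] = cid
    return ids
```

With the paper's threshold of 0.6, two items at distance 0.1 end up in one cluster while an item at distance 0.9 from both stays on its own, mirroring the WOMAN example in Table 5.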
{
"text": "In order to test the method, a gold standard was compiled by the author. The gold standard consists of 9 multilingual wordlists conforming to the input format required by LexStat (see Supplementary Material B) . The data was collected from different publicly available sources. Hence, the selection of language entries as well as the manually conducted cognate judgments were carried out independently of the author. Since not all the original sources provided phonetic transcriptions of the language entries, the respective alphabetic entries were converted to IPA transcription by the author. The datasets differ regarding the treatment of borrowings. In some datasets they are explicitly marked as such and treated as non-cognates, in other datasets no explicit distinction between borrowing and cognacy is drawn. Information on the structure and the sources of the datasets is given in Table 6 . ",
"cite_spans": [
{
"start": 184,
"end": 209,
"text": "Supplementary Material B)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 890,
"end": 897,
"text": "Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Gold Standard",
"sec_num": "4.1"
},
{
"text": "Bergsma and Kondrak (2007) test their method for automatic cognate detection by calculating the set precision (PRE), the set recall (REC), and the set F-score (FS): The set precision p is the proportion of cognate sets calculated by the method which also occurs in the gold standard. The set recall r is the proportion of cognate sets in the gold standard which are also calculated by the method, and the set F-score f is calculated by the formula",
"cite_spans": [
{
"start": 12,
"end": 26,
"text": "Kondrak (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "(3) f = 2 pr p + r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "A certain drawback of these scores is that they only check for completely identical decisions re-garding the clustering of words into cognate sets while neglecting similar tendencies. The similarity of decisions can be evaluated by calculating the proportion of identical decisions (PID) when comparing the test results with those of the gold standard. Given all pairwise decisions regarding the cognacy of word pairs inherent in the gold standard and in the testset, the differences can be displayed using a contingency The PID score can then simply be calculated by dividing the sum of true positives and true negatives by the total number of decisions. In an analogous way the proportion of identical positive decisions (PIPD) and the proportion of identical negative decisions (PIND) can be calculated by dividing the number of true positives by the sum of true positives and false negatives, and by dividing the number of false positives by the sum of false positives and true negatives, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
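The pairwise evaluation scores can be computed from sets of positive decisions; the function and parameter names are mine:

```python
def decision_scores(test_pos, gold_pos, all_pairs):
    """PID, PIPD and PIND from pairwise cognacy decisions.
    `test_pos` / `gold_pos` are the sets of word pairs judged cognate by
    the method and by the gold standard; `all_pairs` is every compared pair."""
    tp = len(test_pos & gold_pos)       # identical positive decisions
    fp = len(test_pos - gold_pos)
    fn = len(gold_pos - test_pos)
    tn = len(all_pairs) - tp - fp - fn  # identical negative decisions
    pid = (tp + tn) / len(all_pairs)
    pipd = tp / (tp + fn) if tp + fn else 0.0
    pind = tn / (fp + tn) if fp + tn else 0.0
    return pid, pipd, pind
```

Unlike the set-based PRE/REC/FS scores, these pairwise scores give partial credit to clusterings that agree on most word pairs without being identical.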
{
"text": "Based on the new method for automatic cognate detection, the 9 testsets were analyzed by LexStat, using a gap penalty of -2 for the alignment analysis, a threshold of 0.7 for the creation of the attested distribution, and 1:1 as the ratio of language-specific to language-independent similarity scores. The threshold for the clustering of sequences into cognate sets was set to 0.6. In order to compare the output of LexStat with other methods, three additional analyses of the datasets were carried out: The first two analyses were based on the calculation of SCA and NED distances of all language entries. Based on these scores, all words were clustered into cognate sets using the flat cluster variant of UPGMA, with a threshold of 0.4 for SCA distances and a threshold of 0.7 for NED, since these thresholds turned out to yield the best results for the respective approaches. The third analysis was based on the above-mentioned approach by Turchin et al. (2010). Since in this approach all decisions regarding cognacy are either positive or negative, no specific cluster algorithm had to be applied. The results of the tests are summarized in Table 8. As can be seen from the table, LexStat outperforms the other methods in almost all respects, the only exception being the proportion of identical negative decisions (PIND). Since non-identical negative decisions point to false positives, this shows that, for the given settings of LexStat, the method of Turchin et al. (2010) performs best at avoiding false positive cognate judgments, but it fails to detect many cognates correctly identified by LexStat. Figure 2 gives the separate PID scores for all datasets, showing that LexStat's good performance is prevalent throughout all datasets. The fact that all methods perform badly on the PIE dataset may point to problems resulting from the size of the wordlists: if the dataset is too small and the genetic distance of the languages too large, one may simply lack the evidence to prove cognacy beyond doubt. LexStat can easily be adjusted to avoid false positives by lowering the threshold for sequence clustering. Using a threshold of 0.5 will yield a PIND score of 0.96, yet the PID score will drop to 0.82.",
"cite_spans": [
{
"start": 928,
"end": 949,
"text": "Turchin et al. (2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1599,
"end": 1607,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
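The flat cluster variant of UPGMA mentioned above can be sketched as a greedy threshold-based procedure: repeatedly merge the two clusters with the smallest average inter-cluster distance until that distance exceeds the threshold. The sketch below uses invented names and is a simplification, not the actual LexStat code.

```python
def flat_upgma(dist, threshold):
    """Flat variant of UPGMA: cluster items of a symmetric distance
    matrix (list of lists) by repeatedly merging the two closest
    clusters, stopping once the smallest average inter-cluster
    distance exceeds the threshold."""
    clusters = [[i] for i in range(len(dist))]

    def avg(a, b):
        # average pairwise distance between two clusters (UPGMA linkage)
        return sum(dist[i][j] for i in a for j in b) / (len(a) * len(b))

    while len(clusters) > 1:
        candidates = [(avg(a, b), x, y)
                      for x, a in enumerate(clusters)
                      for y, b in enumerate(clusters) if x < y]
        d, x, y = min(candidates)
        if d > threshold:
            break  # no pair of clusters is similar enough to merge
        clusters[x] = clusters[x] + clusters[y]
        del clusters[y]
    return clusters
```

With a matrix where items 0/1 and 2/3 have distance 0.2 and all other pairs 0.9, a threshold of 0.4 yields the two clusters [0, 1] and [2, 3]; raising the threshold above 0.9 merges everything into one cluster.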
{
"text": "The LexStat method was designed to distinguish systematic from non-systematic similarities. The method should therefore produce fewer false positive cognate judgments resulting from chance resemblances and borrowings than the other methods. In the KSL dataset, borrowings are marked along with their sources. Out of a total of 5600 word pairs, 72 exhibit a loan relation, and 83 are phonetically similar (with an NED score less than 0.6) but unrelated. Table 9 lists the number and the percentage of false positives resulting from undetected borrowings or chance resemblances for the different methods (see also Sup. Mat. D). While LexStat outperforms the other methods regarding the detection of chance resemblances, it is not particularly good at handling borrowings. LexStat cannot per se deal with borrowings, but only with language-specific as opposed to language-independent similarities. In order to handle borrowings, other methods (such as, e.g., the one by Nelson-Sathi et al., 2011) have to be applied.",
"cite_spans": [
{
"start": 965,
"end": 991,
"text": "Nelson-Sathi et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 9",
"ref_id": "TABREF16"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
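The breakdown in Table 9 amounts to splitting false positive judgments by their likely cause. A minimal sketch of such a tally (names invented for this example; loan status is assumed to be available as a boolean flag per word pair, as in the KSL dataset's loan markings):

```python
def tally_false_positives(judgments):
    """Count false positive cognate judgments by cause.

    judgments: iterable of (predicted_cognate, gold_cognate, is_loan)
    triples, one per word pair. A pair judged cognate that is not
    cognate in the gold standard counts as an undetected borrowing
    if it is marked as a loan, otherwise as a chance resemblance.
    Returns (borrowings, chance_resemblances).
    """
    loans = chance = 0
    for predicted, gold, is_loan in judgments:
        if predicted and not gold:  # false positive
            if is_loan:
                loans += 1
            else:
                chance += 1
    return loans, chance
```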
{
"text": "Table 9 (Borrowings and Chance Resemblances), false positives per method. Borrowings: LexStat 36 / 50%, SCA 44 / 61%, NED 35 / 49%, Turchin 38 / 53%. Chance resemblances: LexStat 14 / 17%, SCA 35 / 42%, NED 74 / 89%, Turchin 26 / 31%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this paper, a new method for automatic cognate detection in multilingual wordlists has been presented. The method differs from other approaches insofar as it employs language-specific scoring schemes which are derived with the help of improved methods for automatic alignment analyses. The test of the method on a large dataset of wordlists taken from different language families shows that it is consistent regardless of the languages being analyzed and outperforms previous approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In contrast to the black box character of many automatic analyses which only yield total scores for the comparison of wordlists, the method yields transparent decisions which can be directly compared with the traditional results of the comparative method. Apart from the basic ideas of the procedure, which surely need to be reevaluated and refined, the most striking limitation of the method lies in the data: if the wordlists are too short, certain cases of cognacy are simply impossible to detect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Compare, for example, English weak [wi\u02d0k] vs. German weich [va\u026a\u00e7] 'soft' for the correspondence of [k] with [\u00e7], and English tongue [t\u028c\u014b] vs. German Zunge [\u02a6\u028a\u014b\u0259] 'tongue' for the correspondence of [t] with [\u02a6].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "My translation, original text: \"[...] \u0432\u044b\u0434\u0435\u043b\u0438\u0442\u044c \u0442\u0430\u043a\u0438\u0435 \u0433\u0440\u0443\u043f\u043f\u044b \u0437\u0432\u0443\u043a\u043e\u0432, \u0447\u0442\u043e \u0438\u0437\u043c\u0435\u043d\u0435\u043d\u0438\u044f \u0432 \u043f\u0440\u0435\u0434\u0435\u043b\u0430\u0445 \u0433\u0440\u0443\u043f\u043f\u044b \u0431\u043e\u043b\u0435\u0435 \u0432\u0435\u0440\u043e\u044f\u0442\u043d\u044b, \u0447\u0435\u043c \u043f\u0435\u0440\u0435\u0432\u043e\u0434\u044b \u0438\u0437 \u043e\u0434\u043d\u043e\u0439 \u0433\u0440\u0443\u043f\u043f\u044b \u0432 \u0434\u0440\u0443\u0433\u0443\u044e\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available online at http://lingulist.de/lingpy/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available online at http://starling.rinet.ru/program.php; a closer description of the software is given in Burlak and Starostin (2005, 270-275).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The different environments are: # (word-initial, cons.), V (word-initial, vow.), C (ascending sonority, cons.), v (maximum sonority, vow.), c (descending sonority, cons.), $ (word-final, cons.), and > (word-final, vow.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Beyond lumping and splitting. Probabilistic issues in historical linguistics",
"authors": [
{
"first": "H",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Baxter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manaster Ramer",
"suffix": ""
}
],
"year": 2000,
"venue": "Time depth in historical linguistics",
"volume": "",
"issue": "",
"pages": "167--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William H. Baxter and Alexis Manaster Ramer. 2000. Beyond lumping and splitting. Probabilistic issues in historical linguistics. In Colin Renfrew, April McMahon, and Larry Trask, editors, Time depth in historical linguistics, pages 167-188. McDonald In- stitute for Archaeological Research, Cambridge.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multilingual cognate identification using integer linear programming",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2007,
"venue": "RANLP Workshop on Acquisition and Management of Multilingual Lexicons",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Grzegorz Kondrak. 2007. Mul- tilingual cognate identification using integer lin- ear programming. In RANLP Workshop on Acqui- sition and Management of Multilingual Lexicons, Borovets, Bulgaria.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automated classification of the world's languages",
"authors": [
{
"first": "Cecil",
"middle": [
"H"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"W"
],
"last": "Holman",
"suffix": ""
},
{
"first": "S\u00f8ren",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "Viveka",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cysouw",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "61",
"issue": "",
"pages": "285--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecil H. Brown, Eric W. Holman, S\u00f8ren Wich- mann, Viveka Velupillai, and Michael Cysouw. 2008. Automated classification of the world's languages. Sprachtypologie und Universalien- forschung, 61(4):285-308.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sravnitel'no-istori\u010deskoe jazykoznanie",
"authors": [
{
"first": "Svetlana",
"middle": [
"A"
],
"last": "Burlak",
"suffix": ""
},
{
"first": "Sergej",
"middle": [
"A"
],
"last": "Starostin",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana A. Burlak and Sergej A. Starostin. 2005. Sravnitel'no-istori\u010deskoe jazykoznanie [Comparative-historical linguistics].",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Gipoteza drevnej\u0161ego rodstva jazykovych semej Severnoj Evrazii s verojatnostej to\u010dky zrenija [A probabilistic hypothesis concerning the oldest relationships among the language families of Northern Eurasia",
"authors": [
{
"first": "Aron",
"middle": [
"B"
],
"last": "Dolgopolsky",
"suffix": ""
}
],
"year": 1964,
"venue": "Voprosy Jazykoznanija",
"volume": "2",
"issue": "",
"pages": "53--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron B. Dolgopolsky. 1964. Gipoteza drevne- j\u0161ego rodstva jazykovych semej Severnoj Evrazii s verojatnostej to\u010dky zrenija [A probabilistic hypoth- esis concerning the oldest relationships among the language families of Northern Eurasia]. Voprosy Jazykoznanija, 2:53-63.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Computational feature-sensitive reconstruction of language relationships: Developing the ALINE distance for comparative historical linguistic reconstruction",
"authors": [
{
"first": "Sean",
"middle": [
"S"
],
"last": "Downey",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Hallmark",
"suffix": ""
},
{
"first": "Murray",
"middle": [
"P"
],
"last": "Cox",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norquest",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Lansing",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Quantitative Linguistics",
"volume": "15",
"issue": "4",
"pages": "340--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean S. Downey, Brian Hallmark, Murray P. Cox, Pe- ter Norquest, and Stephen Lansing. 2008. Com- putational feature-sensitive reconstruction of lan- guage relationships: Developing the ALINE dis- tance for comparative historical linguistic recon- struction. Journal of Quantitative Linguistics, 15(4):340-369.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Beautiful trees on unstable ground. Notes on the data problem in lexicostatistics",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Geisler",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Forthcoming",
"suffix": ""
}
],
"year": null,
"venue": "Die Ausbreitung des Indogermanischen. Thesen aus Sprachwissenschaft",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Geisler and Johann-Mattis List. forthcoming. Beautiful trees on unstable ground. Notes on the data problem in lexicostatistics. In Heinrich Hettrich, ed- itor, Die Ausbreitung des Indogermanischen. Thesen aus Sprachwissenschaft, Arch\u00e4ologie und Genetik. Reichert, Wiesbaden.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Akzent und Lautwandel in der Romania",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Geisler",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Geisler. 1992. Akzent und Lautwandel in der Romania. Narr, T\u00fcbingen.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language-tree divergence times support the Anatolian theory of Indo-European origin",
"authors": [
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2003,
"venue": "Nature",
"volume": "426",
"issue": "6965",
"pages": "435--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the Ana- tolian theory of Indo-European origin. Nature, 426(6965):435-439.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Algorithms on strings, trees and sequences",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gusfield",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gusfield. 1997. Algorithms on strings, trees and sequences. Cambridge University Press, Cam- bridge.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Amino acid substitution matrices from protein blocks",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Henikoff",
"suffix": ""
},
{
"first": "Jorja",
"middle": [
"G"
],
"last": "Henikoff",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "89",
"issue": "",
"pages": "10915--10919",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Henikoff and Jorja G. Henikoff. 1992. Amino acid substitution matrices from protein blocks. PNAS, 89(22):10915-10919.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Xi\u00e0nd\u00e0i H\u00e0ny\u01d4 f\u0101ngy\u00e1n y\u012bnk\u00f9",
"authors": [],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u012bng H\u00f3u, editor. 2004. Xi\u00e0nd\u00e0i H\u00e0ny\u01d4 f\u0101ngy\u00e1n y\u012bnk\u00f9 [Phonological database of Chinese dialects].",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The significance of word lists. Statistical tests for investigating historical connections between languages",
"authors": [
{
"first": "Brett",
"middle": [],
"last": "Kessler",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brett Kessler. 2001. The significance of word lists. Statistical tests for investigating historical connec- tions between languages. CSLI Publications, Stan- ford.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Johann-Mattis List. forthcoming. SCA: Phonetic alignment based on sound classes",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 1997,
"venue": "New directions in logic, language, and computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2002. Algorithms for language reconstruction. Dissertation, University of Toronto, Toronto. Roger Lass. 1997. Historical linguistics and language change. Cambridge University Press, Cambridge. Johann-Mattis List. forthcoming. SCA: Phonetic alignment based on sound classes. In Marija Slavkovik and Dan Lassiter, editors, New direc- tions in logic, language, and computation. Springer, Berlin and Heidelberg.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An improved statistical test for historical linguistics",
"authors": [
{
"first": "Cinzia",
"middle": [],
"last": "Mortarino",
"suffix": ""
}
],
"year": 2009,
"venue": "Statistical Methods and Applications",
"volume": "18",
"issue": "2",
"pages": "193--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cinzia Mortarino. 2009. An improved statistical test for historical linguistics. Statistical Methods and Applications, 18(2):193-204.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Networks uncover hidden lexical borrowing in Indo-European language evolution",
"authors": [
{
"first": "Shijulal",
"middle": [],
"last": "Nelson-Sathi",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Geisler",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Fangerau",
"suffix": ""
},
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Royal Society B",
"volume": "278",
"issue": "",
"pages": "1794--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shijulal Nelson-Sathi, Johann-Mattis List, Hans Geisler, Heiner Fangerau, Russell D. Gray, William Martin, and Tal Dagan. 2011. Networks uncover hidden lexical borrowing in Indo-European lan- guage evolution. Proceedings of the Royal Society B, 278(1713):1794-1803.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multiple sequence alignments in linguistics",
"authors": [
{
"first": "Jelena",
"middle": [],
"last": "Proki\u0107",
"suffix": ""
},
{
"first": "Martijn",
"middle": [],
"last": "Wieling",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Nerbonne",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education",
"volume": "",
"issue": "",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelena Proki\u0107, Martijn Wieling, and John Nerbonne. 2009. Multiple sequence alignments in linguis- tics. In Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Educa- tion, pages 18-25, Stroudsburg, PA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Indo-European and computational cladistics",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Ringe",
"suffix": ""
},
{
"first": "Tandy",
"middle": [],
"last": "Warnow",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2002,
"venue": "Transactions of the Philological Society",
"volume": "100",
"issue": "1",
"pages": "59--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Ringe, Tandy Warnow, and Ann Taylor. 2002. Indo-European and computational cladistics. Transactions of the Philological Society, 100(1):59-129.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Japanese dialects",
"authors": [
{
"first": "Hattori",
"middle": [],
"last": "Shir\u014d",
"suffix": ""
}
],
"year": 1973,
"venue": "Diachronic, areal and typological linguistics",
"volume": "",
"issue": "",
"pages": "368--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hattori Shir\u014d. 1973. Japanese dialects. In Henry M. Hoenigswald and Robert H. Langacre, editors, Di- achronic, areal and typological linguistics, pages 368-400. Mouton, The Hague and Paris.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A statistical method for evaluating systematic relationships",
"authors": [
{
"first": "Robert",
"middle": [
"R"
],
"last": "Sokal",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"D"
],
"last": "Michener",
"suffix": ""
}
],
"year": 1958,
"venue": "",
"volume": "28",
"issue": "",
"pages": "1409--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert R. Sokal and Charles D. Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Scientific Bulletin, 28:1409-1438.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tower of Babel. An etymological database project. Online resource",
"authors": [
{
"first": "George",
"middle": [],
"last": "Starostin",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Starostin. 2008. Tower of Babel. An etymological database project. Online resource. URL: http://starling.rinet.ru.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A pipeline for computational historical linguistics",
"authors": [
{
"first": "Lydia",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"F"
],
"last": "Stadler",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cysouw",
"suffix": ""
}
],
"year": 2011,
"venue": "Language Dynamics and Change",
"volume": "1",
"issue": "1",
"pages": "89--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lydia Steiner, Peter F. Stadler, and Michael Cysouw. 2011. A pipeline for computational historical linguistics. Language Dynamics and Change, 1(1):89-127.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The dictionary of historical and comparative linguistics",
"authors": [],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert L. Trask, editor. 2000. The dictionary of his- torical and comparative linguistics. Edinburgh Uni- versity Press, Edinburgh.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Analyzing genetic connections between languages by matching consonant classes",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turchin",
"suffix": ""
},
{
"first": "Ilja",
"middle": [],
"last": "Peiros",
"suffix": ""
},
{
"first": "Murray",
"middle": [],
"last": "Gell-Mann",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Language Relationship",
"volume": "3",
"issue": "",
"pages": "117--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turchin, Ilja Peiros, and Murray Gell-Mann. 2010. Analyzing genetic connections between lan- guages by matching consonant classes. Journal of Language Relationship, 3:117-126.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Comparison of languages in contact. Institute of Linguistics Academia Sinica",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Wang. 2006. Comparison of languages in contact. Institute of Linguistics Academia Sinica, Taipei.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "SCA Distance vs. NED",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "PID Scores of the Methods",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>gives an ex-</td></tr></table>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Alignment Analysis",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"5\">ID Items German English Swedish</td></tr><tr><td>1</td><td>hand</td><td>hant</td><td>haend</td><td>hand</td></tr><tr><td>2</td><td colspan=\"2\">woman fra\u028a</td><td>w\u028am\u0259n</td><td>kvina</td></tr><tr><td>3</td><td>know</td><td>k\u025bn\u0259n</td><td>n\u0259\u028a</td><td>\u00e7\u025bna</td></tr><tr><td>3</td><td>know</td><td>v\u026as\u0259n</td><td>-</td><td>ve\u02d0ta</td></tr></table>",
"num": null,
"text": "gives an example for the possible structure of an input file.",
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>English German Att. Exp. Score</td></tr><tr><td>#[t,d] #[t,d] #[t,d] #[\u03b8,\u00f0] #[t,d] #[t,d] #[\u02a6] #[\u0283,s,z] 1.0 3.0 3.0 7.0 #[\u03b8,\u00f0] #[\u02a6] 0.0 0.0 #[\u03b8,\u00f0] #[s,z] [t,d]$ [t,d]$ 21.0 8.86 6.3 1.24 6.3 0.38 6.0 1.99 -1.5 0.72 6.3 0.25 -1.5 1.33 0.5 [t,d]$ [\u02a6]$ 3.0 1.62 3.9 [t,d]$ 6.0 5.30 1.5 [\u0283,s]$ [\u03b8,\u00f0]$ [t,d]$ 4.0 1.14 4.8 [\u03b8,\u00f0]$ [\u02a6]$ 0.0 0.20 -1.5 [\u03b8,\u00f0]$ [\u0283,s]$ 0.0 0.80 0.5</td></tr></table>",
"num": null,
"text": ", C = [\u02a6], S = [\u0283, s, z]",
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>The spe-</td></tr></table>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>Word Pair</td><td>SCA LexStat</td></tr><tr><td>German Schlange [\u0283la\u014b\u0259] English Snake [sne\u026ak] German Wald [valt] English wood [w\u028ad] German Staub [\u0283taup] English dust [d\u028cst]</td><td>0.44 0.67 0.40 0.64 0.43 0.78</td></tr></table>",
"num": null,
"text": "which was described in section 2.2.",
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table/>",
"num": null,
"text": "SCA Distance vs. LexStat Distance",
"type_str": "table"
},
"TABREF9": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Pairwise Distance Matrix",
"type_str": "table"
},
"TABREF11": {
"html": null,
"content": "<table/>",
"num": null,
"text": "The Gold Standard",
"type_str": "table"
},
"TABREF13": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Comparing Gold Standard and Testset",
"type_str": "table"
},
"TABREF15": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Performance of the Methods",
"type_str": "table"
},
"TABREF16": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Borrowings and Chance Resemblances",
"type_str": "table"
}
}
}
}