{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T01:13:11.569127Z"
    },
    "title": "Restoring the Sister: Reconstructing a Lexicon from Sister Languages using Neural Machine Translation",
    "authors": [
        {
            "first": "Remo",
            "middle": [],
            "last": "Nitschke",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "The University of Arizona",
                "location": {}
            },
            "email": "nitschke@email.arizona.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The historical comparative method has a long history in historical linguistics. It describes a process by which historical linguists aim to reverse-engineer the historical developments of language families in order to reconstruct proto-forms and familial relations between languages. In recent years, there have been multiple attempts to replicate this process through machine learning, especially in the realm of cognate detection (List et al., 2016; Ciobanu and Dinu, 2014; Rama et al., 2018). So far, most of these experiments aimed at actual reconstruction have attempted the prediction of a proto-form from the forms of the daughter languages (Ciobanu and Dinu, 2018; Meloni et al., 2019). Here, we propose a reimplementation that uses modern related languages, or sisters, instead, to reconstruct the vocabulary of a target language. In particular, we show that we can reconstruct the vocabulary of a target language by using a fairly small data set of parallel cognates from different sister languages, using a neural machine translation (NMT) architecture with a standard encoder-decoder setup. This effort is directly in furtherance of the goal to use machine learning tools to help under-served language communities in their efforts at reclaiming, preserving, or reconstructing their own languages.",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The historical comparative method has a long history in historical linguistics. It describes a process by which historical linguists aim to reverse-engineer the historical developments of language families in order to reconstruct proto-forms and familial relations between languages. In recent years, there have been multiple attempts to replicate this process through machine learning, especially in the realm of cognate detection (List et al., 2016; Ciobanu and Dinu, 2014; Rama et al., 2018). So far, most of these experiments aimed at actual reconstruction have attempted the prediction of a proto-form from the forms of the daughter languages (Ciobanu and Dinu, 2018; Meloni et al., 2019). Here, we propose a reimplementation that uses modern related languages, or sisters, instead, to reconstruct the vocabulary of a target language. In particular, we show that we can reconstruct the vocabulary of a target language by using a fairly small data set of parallel cognates from different sister languages, using a neural machine translation (NMT) architecture with a standard encoder-decoder setup. This effort is directly in furtherance of the goal to use machine learning tools to help under-served language communities in their efforts at reclaiming, preserving, or reconstructing their own languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Historical linguistics has long employed the historical comparative method to establish familial connections between languages and to reconstruct proto-forms (cf. Klein et al., 2017b; Meillet, 1967) . More recently, the comparative method has been employed by revitalization projects for lexical reconstruction of lost lexical items (cf. Delgado et al., 2019) . In the particular case of Delgado et al. (2019) , lost lexical items of the target language are reconstructed by using equivalent cognates of still-spoken modern sister languages, i.e., languages in the same language family that share some established common ancestor language and a significant number of cognates with the target language. By reverse-engineering the historical phonological processes that happened between the target language and the sister languages, one can predict what the lexical item in the target language should be. This is essentially a twist on the comparative method, using the same principles, but to reconstruct a modern sister, as opposed to a proto-antecedent.",
                "cite_spans": [
                    {
                        "start": 163,
                        "end": 183,
                        "text": "Klein et al., 2017b;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 184,
                        "end": 198,
                        "text": "Meillet, 1967)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 338,
                        "end": 359,
                        "text": "Delgado et al., 2019)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 388,
                        "end": 409,
                        "text": "Delgado et al. (2019)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "While neural net systems have been used to emulate the historical comparative method 1 to reconstruct proto-forms (Meloni et al., 2019; Ciobanu and Dinu, 2018) and for cognate detection (List et al., 2016; Ciobanu and Dinu, 2014; Rama et al., 2018) , there have not, to the best of our knowledge, been any attempts to use neural nets to predict/reconstruct lexical items of a sister language for revitalization/reconstruction purposes. Meloni et al. (2019) report success for a similar task (reconstructing Latin proto-forms) by using cognate pattern lists as a training input. Instead of reconstructing Latin proto-forms from only Italian roots, they use Italian, Spanish, Portuguese, Romanian and French cognates of Latin, i.e., mapping from many languages to one. As our intended use-case (see section 1.1) is one that suffers from data sparsity, we explicitly explore the degree to which expanding the list of sister languages in the many-to-one mapping can compensate for fewer available data points. Since the long-term goal of this project is to aid language revitalization efforts, the question of available data is of utmost importance. Machine learning often requires vast amounts of data, and languages which are undergoing revitalization usually have very sparse amounts of data available. Hence, the goal for a machine learning approach 1 Due to the nature of neural nets we do not know whether these systems actually emulate the historical comparative method or not. What is meant here is that they were used for the same tasks.",
                "cite_spans": [
                    {
                        "start": 114,
                        "end": 135,
                        "text": "(Meloni et al., 2019;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 136,
                        "end": 159,
                        "text": "Ciobanu and Dinu, 2018)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 186,
                        "end": 205,
                        "text": "(List et al., 2016;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 206,
                        "end": 229,
                        "text": "Ciobanu and Dinu, 2014;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 230,
                        "end": 248,
                        "text": "Rama et al., 2018)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 436,
                        "end": 456,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1348,
                        "end": 1349,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "here is not necessarily the highest possible accuracy, but rather the ability to operate with as little data as possible, while still retaining a reasonable amount of accuracy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our particular contributions are:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "1. We demonstrate an approach for reframing the historical comparative method to reconstruct a target language from its sisters using a neural machine translation framework. We show that this can be done with easily accessible open source frameworks such as OpenNMT (Klein et al., 2017a) .",
                "cite_spans": [
                    {
                        "start": 266,
                        "end": 287,
                        "text": "(Klein et al., 2017a)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "2. We provide a detailed analysis of the degree to which inputs from additional sister languages can overcome issues of data sparsity. We find that adding more related languages allows for higher accuracy with fewer data points. However, we also find that blindly adding languages to the input stream does not always yield this higher accuracy. The results suggest that the added input language needs to share a significant number of cognates with the target language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This experiment was designed with a specific use-case in mind: Lexical reconstruction for language revitalization projects. Specifically, the situation where this type of model may be most applicable would be a language reclamation project in the definition of Leonhard (2007) or a language revival process in the definition of McCarty and Nicholas (2014) . In essence, these are cases where there is some need to recover or reconstruct a lexicon. An example of such a case might be the Wampanoag language reclamation project (https://www.wlrp.org/), or comparable projects using the methods outlined in Delgado et al. (2019) . As this is a proof-of-concept, we use the Romance language family, specifically the non-endangered languages of French, Spanish, Italian, Portuguese and Romanian, and operate under the assumption that these results can inform how one can use this approach with other languages of interest. However, we are aware that Romance language morphology may be radically different from that of some of the languages that may be in the scope of this use case, such as agglutinative and polysynthetic languages, and that we cannot fully predict the performance of this type of system for such languages from the Romance example. Regardless of this, some insights gained here will still be applicable in those cases, such as the question of compensating for a lack of data by using multiple languages.",
                "cite_spans": [
                    {
                        "start": 260,
                        "end": 275,
                        "text": "Leonhard (2007)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 327,
                        "end": 355,
                        "text": "McCarty and Nicholas (2014)",
                        "ref_id": null
                    },
                    {
                        "start": 599,
                        "end": 620,
                        "text": "Delgado et al. (2019)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Intended Use-Case and Considerations",
                "sec_num": "1.1"
            },
            {
                "text": "Languages that are the focus of language revitalization projects are typically not targets for deep learning projects. One of the reasons for this is the fact that these languages usually do not have large amounts of data available for training state-of-the-art neural approaches. These systems need large amounts of data, and Neural Machine Translation systems, such as the one used in this project, are no exception. For example, Cho et al. (2014) use data sets varying between 5.5 million and 348 million words. However, the task of proto-form reconstruction, which is really a task of cognate prediction, can be achieved with fairly small datasets, if parallel language input is used. This was shown by Meloni et al. (2019) , whose system predicted 84% of items within an edit distance of 1, meaning that 84% of the predictions were so accurate that only one or zero edits were necessary to achieve the true target. For example, if the target output is \"grazie\", the machine might predict \"grazia\" (one edit) or \"grazie\" (zero edits). Within a language revitalization context, this level of accuracy would actually be a very good outcome. In this scenario, a linguist or speaker familiar with the language would vet the output regardless, so small edit distances should not pose a big problem. Further, all members of a language revitalization project or language community would ultimately vet the output, as they would make a decision on whether to accept or reject the output as a lexical item of the language.",
                "cite_spans": [
                    {
                        "start": 427,
                        "end": 444,
                        "text": "Cho et al. (2014)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 700,
                        "end": 720,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Intended Use-Case and Considerations",
                "sec_num": "1.1"
            },
            {
                "text": "This begs the question of why a language revitalization project would want to go through the trouble of using such an algorithm in the first place: if they have someone available to vet the output, then that person may as well do the reconstructive work themselves, as proposed in Delgado et al. (2019) . This all depends on two factors: First, how high is the volume of lexical items that need to be reconstructed or predicted? The effort may not be worth it for 10 or even 100 lexical items, but beyond this a neural machine translation model can potentially outperform the manual labor. Once trained, the model can make thousands of predictions in minutes, as long as input data is available.",
                "cite_spans": [
                    {
                        "start": 281,
                        "end": 302,
                        "text": "Delgado et al. (2019)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Intended Use-Case and Considerations",
                "sec_num": "1.1"
            },
            {
                "text": "Second, and potentially more important, it will depend on how well the historical phonological relationships between the languages are understood. For a family like Romance, we have a very good understanding of the historical genesis of the languages and the different phonological processes they underwent; see for example Maiden et al. (2013) . However, there are many language families in the world where these relationships and histories are less than clear. In such situations, a machine learning approach would be beneficial, because the algorithm learns 2 the relationships for us and gives predictions that just need to be vetted. Under this perspective, the best model might not necessarily be the one that produces the most accurate output, but perhaps the one that produces the fewest incorrigible mistakes. An incorrigible mistake here would be the algorithm predicting an item that is completely unrelated to the target root (e.g., predicting \"cinque\" for a target of \"grazie\"). Further, ease of usability and accessibility will be another factor for this kind of use-case, as not every project of this type will have a computational linguist to call on. Hence, another aim should be a low threshold for reproducibility and the utilization of easy-to-use open-source frameworks. In the spirit of the latter, all data and code necessary to reproduce the results are open-source and freely available. This paper is intended for computational linguists, as well as linguists and community members who are involved with projects surrounding languages which might benefit from this approach. As such, it is written with both audiences in mind, with Section 6 (\"Warning Labels for Interested Linguists\") specifically aimed at linguists and community members interested in a potential application of this method.",
                "cite_spans": [
                    {
                        "start": 324,
                        "end": 344,
                        "text": "Maiden et al. (2013)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Intended Use-Case and Considerations",
                "sec_num": "1.1"
            },
            {
                "text": "The data set used for this experiment was provided by Shauli Rafvogel of Meloni et al. (2019) . The initial set consisted of 5420 lines of cognate sextuples of the Romance language family, specifically: Romanian, French, Spanish, Portuguese, Italian and Latin. As the aim for this experiment was to reconstruct from sister languages to a sister language, the Latin items were removed from the set and instead Italian was chosen to be the target language for the experiment, since it had the most complete pattern with respect to the other languages in the set. Table 1 illustrates the types of lines present in the initial dataset.",
                "cite_spans": [
                    {
                        "start": 73,
                        "end": 93,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 563,
                        "end": 571,
                        "text": "Table  1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "The Dataset",
                "sec_num": "2"
            },
            {
                "text": "Lines with no input and lines with no target were removed: lines where there was a target but no input (row 3), as well as lines where there was input but no target (row 1). After the removal of all lines which led to empty patterns in the Italian set, and all lines which were empty patterns in the input, 3527 lines remained. From these, 2466 lines were taken as training data, 345 were taken for validation, and 717 were set aside for testing. Meloni et al. (2019) use both an orthographic and an IPA data set, and show that the orthographic set yielded more accurate results. Here, we use only orthographic representations, which we prefer not for accuracy, but because orthographic datasets are more easily acquired for most languages, particularly those of interest in language reclamation projects. If both an IPA set and an orthographic set are available, one may attempt using both to boost the accuracy of the results. Chen (2018) showed that this is possible with glossing data in the case of sentence-level neural machine translation. We will discuss this implementation in Section 5.2.",
                "cite_spans": [
                    {
                        "start": 461,
                        "end": 481,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Dataset",
                "sec_num": "2"
            },
            {
                "text": "See Figure 1 for a very simplified phylogenetic tree representation of the familial relations of the Romance languages used in this dataset. This tree was constructed using data from glottolog (Hammarstr\u00f6m et al., 2020) , and is included just for illustrative purposes and not as a statement about the phylogeny of Romance languages. 3",
                "cite_spans": [
                    {
                        "start": 193,
                        "end": 219,
                        "text": "(Hammarstr\u00f6m et al., 2020)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 4,
                        "end": 12,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Dataset",
                "sec_num": "2"
            },
            {
                "text": "This experiment was run using the OpenNMT-py neural machine translation framework (Klein et al., 2017a), using the default settings (a 2-layer LSTM with 500 hidden units in both the encoder and decoder). The OpenNMT-py default setup was chosen intentionally; the envisioned use-case requires an easily reproducible approach for interested users or communities who might profit from using this method for their own purposes, but who do not necessarily have deep expertise in machine learning or in tuning neural models. A publicly available toolkit, like OpenNMT, and a no-configuration setup help lower the barrier to entry for these parties.",
                "cite_spans": [
                    {
                        "start": 76,
                        "end": 97,
                        "text": "(Klein et al., 2017a)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "3"
            },
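For reference, a typical OpenNMT-py (1.x) run with the default model looks roughly like the following. The file names and the `data/demo` paths are placeholders, and exact flags may differ between OpenNMT-py versions; consult the toolkit's own documentation before running.

```shell
# Build vocabularies and binarized data from parallel text files
onmt_preprocess -train_src src-train.txt -train_tgt tgt-train.txt \
                -valid_src src-val.txt -valid_tgt tgt-val.txt \
                -save_data data/demo

# Train the default 2-layer, 500-unit LSTM encoder-decoder
onmt_train -data data/demo -save_model demo-model -train_steps 10000

# Predict cognates for the held-out test inputs
onmt_translate -model demo-model_step_10000.pt -src src-test.txt -output pred.txt
```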
            {
                "text": "Neural machine translation (NMT) frameworks are designed to translate sentences from one language to another, but they can be used for a number of sequential data tasks (Neubig, 2017). One such task is the prediction of a cognate from a set of input words, as done here. These frameworks typically use an encoder-decoder setup, where both the encoder and decoder are often implemented as LSTM (Long Short-Term Memory) networks (Hochreiter and Schmidhuber, 1997), which have the advantage of effectively capturing long-distance dependencies (Neubig, 2017). In an encoder-decoder setup, the encoder reads in the character-based input representation and transforms it into a vector representation. The decoder takes this vector representation and transforms it into a character-based output representation (Cho et al., 2014).",
                "cite_spans": [
                    {
                        "start": 169,
                        "end": 183,
                        "text": "(Neubig, 2017)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 428,
                        "end": 462,
                        "text": "(Hochreiter and Schmidhuber, 1997)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 542,
                        "end": 556,
                        "text": "(Neubig, 2017)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 806,
                        "end": 824,
                        "text": "(Cho et al., 2014)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "3"
            },
            {
                "text": "NMT frameworks also employ a \"vocabulary\" set, which contains the vocabulary of the language being translated from and the vocabulary of the language being translated to. The size of this vocabulary is often an issue for the effectiveness of NMT models (Hirschmann et al., 2016). In our case, the source vocabulary simply contains all of the characters that occur in the input language examples, and the target vocabulary contains the characters that occur in the target language examples. To illustrate: if this task were about predicting English words, the target vocabulary would contain all the letters of the English alphabet.",
                "cite_spans": [
                    {
                        "start": 261,
                        "end": 286,
                        "text": "(Hirschmann et al., 2016)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "3"
            },
            {
                "text": "Since the input in our case is a list of cognates from different languages, we need to consider how this input is fed to the model. There are two obvious options: we can either feed the cognates one by one, or we can merge them before feeding them in. In this experiment, we merge the words character by character to construct the input lines. This means that for every line in the input, the first character of each word was concatenated, then the second character of each word, and so on. For an illustration:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input Concatenation",
                "sec_num": "3.1"
            },
            {
                "text": "(1) patterns in the input: aille, alha, al, aie",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input Concatenation",
                "sec_num": "3.1"
            },
            {
                "text": "(2) target patterns: aglia",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input Concatenation",
                "sec_num": "3.1"
            },
            {
                "text": "(3) input: aaaaillilhelae (4) target: aglia This merging delivered marginally better results than simple concatenation in early testing, which is why it was selected. It is unclear why this is the case. We suspect that the merged input makes it easier for the model to recognize when the same characters appear in the same position of the input, as is the case with \"a\" in the initial position in the above example. However, we are cautious about recommending this input representation in general, because different morphologies may be better represented by simple concatenation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input Concatenation",
                "sec_num": "3.1"
            },
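The character-by-character merge can be implemented in a few lines; this is a sketch, and the function name `merge_cognates` is ours.

```python
from itertools import zip_longest

def merge_cognates(words):
    # Take the first character of every word, then the second character
    # of every word, and so on; shorter words simply drop out of later
    # rounds (the empty fill value contributes nothing to the join).
    return "".join(ch for tier in zip_longest(*words, fillvalue="") for ch in tier)

merge_cognates(["aille", "alha", "al", "aie"])  # "aaaaillilhelae"
```

Note how all four initial characters of the example cognates land next to each other at the start of the merged string, which is the property we suspect helps the model.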
            {
                "text": "To determine the performance gains from simply having more data versus having data from more languages, we create several training scenarios. In each, we use the same aforementioned 2-layer LSTM. To understand the benefit of additional languages, we first train with the entire training set with all four languages, then successively remove languages from the input set until only one remains. Next, to compare this to the impact of simply having fewer data points, but from all languages, we generate several impoverished versions of the data set. For these impoverished versions, lines were removed randomly 4 from the set, reducing the data by 70%, 50%, 30% and 10% respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Different Training Setups",
                "sec_num": "3.2"
            },
            {
                "text": "Machine translation is usually evaluated using the BLEU score (Papineni et al., 2002), but BLEU is designed with sentence-level translations in mind. We instead evaluate the output by edit distance in the style of Meloni et al. (2019), calculating the percentage of the output which is within a given edit distance of the target. In addition to this metric, we also use a custom evaluation metric designed to emphasize the usability of the output for the intended use-case, i.e., as predictions to be vetted by an expert to save time over doing the entire analysis manually. To calculate this score, we compute the Damerau-Levenshtein edit distance to the target for each word and weight the predictions by their edit distance. That is:",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 79,
                        "text": "(Papineni et al., 2002)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 224,
                        "end": 244,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Measures",
                "sec_num": "4"
            },
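For concreteness, the Damerau-Levenshtein distance used here can be computed with the standard optimal-string-alignment dynamic program. This is a sketch of the textbook algorithm, not the paper's own code; libraries such as `jellyfish` provide equivalent functions.

```python
def osa_distance(a, b):
    # Optimal string alignment distance (a common Damerau-Levenshtein
    # variant): insertions, deletions, substitutions, and adjacent
    # transpositions each cost one edit.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

osa_distance("aglia", "aglia")  # 0
osa_distance("ab", "ba")        # 1 (one transposition)
```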
            {
                "text": "score = (a + b * .9 + c * .8 + d * .7 + e * .6)/t",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Measures",
                "sec_num": "4"
            },
            {
                "text": "where a is the number of predictions with distance 0, b the number with distance 1, c the number with distance 2, d the number with distance 3, e the number with distance 4, and t the total number of predictions. As an example, consider a scenario with three predicted cognates. If system 1 produces three output patterns, each at an edit distance of 2, it receives a score of (3 * 0.8)/3 = 0.8. If system 2 produces two output patterns at edit distance 0 and one at distance 5, this results in a score of (2 * 1.0 + 0)/3, roughly 0.67.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Measures",
                "sec_num": "4"
            },
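The scoring formula can be written directly; in this sketch, the hypothetical helper `usability_score` takes the list of per-prediction edit distances to the targets.

```python
def usability_score(distances):
    # Distances of 0-4 edits receive weights 1.0 down to 0.6;
    # anything beyond 4 edits contributes nothing to the score.
    weights = {0: 1.0, 1: 0.9, 2: 0.8, 3: 0.7, 4: 0.6}
    return sum(weights.get(d, 0.0) for d in distances) / len(distances)

round(usability_score([2, 2, 2]), 2)  # 0.8  (system 1 in the example above)
round(usability_score([0, 0, 5]), 2)  # 0.67 (system 2 in the example above)
```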
            {
                "text": "The logic behind this metric is that any prediction with an edit distance larger than 4 is essentially useless for the proposed task, since such a large edit distance essentially constitutes an incorrigible mistake, as mentioned in Section 1.1. The edit distance of 4 is an arbitrary cut-off to a degree, but it gives us a simple and informative evaluation metric for our use case. This metric will rank a model that has a large number of items in a but also a large number of items beyond 4 edits lower than a model with items mostly in the b-d range. Presumably, the latter is more useful for the task, as small errors can be corrected by linguists or language users.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Measures",
                "sec_num": "4"
            },
            {
                "text": "Using this metric, we can rank different input combinations according to their assumed usefulness for the task of lexical reconstruction for revitalization purposes. Table 2 shows the edit distance percentages and scores of different runs at 10,000 steps of training. 5 We can compare the difference in outcome between using fewer languages in the input versus using fewer input lines overall. This addresses the question of whether adding multiple languages to the input helps compensate for fewer data points (cognate sets). The runs with successively reduced numbers of languages (top half of the table) are all trained with all available input lines (2466) but exclude specific columns/languages. The \"reduced input\" runs (bottom half of the table), on the other hand, use all four languages but fewer cognates, excluding rows. These runs had the following numbers of training input lines: 10%: 2220 lines, 30%: 1793 lines, 50%: 1345 lines, 70%: 896 lines (recall that the total number of input lines available for training was 2466). All runs were tested on the same testing data.",
                "cite_spans": [
                    {
                        "start": 268,
                        "end": 269,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 166,
                        "end": 173,
                        "text": "Table 2",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation Measures",
                "sec_num": "4"
            },
            {
                "text": "In Table 2 (see following page), we can observe that, unsurprisingly, the training sample with the most languages and data (Span-Fre-Port-Ro) performs best. 44.6% within edit distance 0 means that almost half the predictions the model makes are correct. In terms of accuracy, this is not spectacular: Meloni et al. (2019) report 64.1% within edit distance 0. However, considering that we are using a training set approximately a third the size of theirs (2466 cognates compared with 7038), the performance is surprisingly good. The more important measure for the intended use-case is that over 80% of items are within an edit distance of 3, meaning that of the output produced, 80% needs only three edits or fewer to meet the target.",
                "cite_spans": [
                    {
                        "start": 302,
                        "end": 322,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 2",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5"
            },
            {
                "text": "We can also observe that performance successively drops as we remove languages, with the Spanish-only input 6 performing worst. However, the way in which this performance drops is not entirely transparent. In terms of scoring, the Spanish-French (Span-Fre) sample actually performs better than the Spanish-Portuguese-Romanian (Span-Port-Ro) sample. Further, while Span-Port-Ro has significantly better values in the 0-2 edit range, it is outperformed by Span-Fre in terms of score because Span-Fre has more items in the \u2264 4 edit range.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5"
            },
            {
                "text": "The noticeable difference between Span-Fre-Port and Span-Port-Ro is surprising and warrants some examination. The likely explanation is twofold. First, the Romanian set is the one with the most empty patterns: the Romanian training data includes only 930 filled patterns, whereas Portuguese includes 1905, French 1790, and Spanish 2125. The Romanian data may simply be too small in comparison with the others to have a significant impact on the outcome. The other factor may be that Romanian is phylogenetically the most distant from the target language, Italian (Figure 1).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 606,
                        "end": 615,
                        "text": "(Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5"
            },
            {
                "text": "This becomes even more apparent in Figure 2, which shows the performance of different models over time. 7 Here we can observe that there is hardly any difference between the performance of Span-Fre-Port-Ro and Span-Fre-Port over time; only at 10,000 steps do they start to diverge. This divergence at the 10,000-step mark is likely random; the graph suggests that their overall performance is almost identical in terms of scoring. Another point in this direction is the seemingly convergent curves of Span-Fre and Span-Port-Ro, suggesting that there is no difference between using two or three languages as input if the third language is Romanian.",
                "cite_spans": [
                    {
                        "start": 106,
                        "end": 107,
                        "text": "7",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 35,
                        "end": 43,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5"
            },
            {
                "text": "Discounting the effect of excluding or including Romanian, we can observe that performance overall tends to increase with each parallel language added. This is especially evident in the obvious drop-off in performance of the Spanish-only input. If we assume that Romanian has no impact, then the three-language runs (blue and orange) perform similarly, the two-language runs (red and green) perform similarly, and there is an obvious drop-off between those two groups. This suggests that parallel language input can compensate for smaller datasets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5"
            },
            {
                "text": "Due to the small dataset, the scores plateau fairly early, around the 3000-step mark for most runs. This suggests that it would be sufficient to train these models for 3000 steps, which would save some time on low-end hardware. However, with datasets this small, training time should rarely exceed 5 hours on consumer-grade PCs. 8",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5"
            },
            {
                "text": "Let us now consider the second question of this paper: can parallel language input compensate for small dataset size? We know that performance drops if we reduce the number of languages in the input mix. Now we compare this drop-off to the reduction in performance caused by reducing the overall amount of input data. This can be seen in Figure 3, which shows the performance at different training steps for models trained on decreasing amounts of data. Included for comparison are models trained on all data using all four (Span-Fre-Port-Ro), three (Span-Port-Ro), and one (Span) input languages. 9 First, we observe that a 10% reduction in training data (grey) does not seem to have a strong impact, as this performs mostly on par with Span-Fre-Port-Ro. Further, we can see that the 30% reduced case performs marginally better than Span-Port-Ro. This is a good result, as it suggests that we can compensate for a fair amount of missing data by using additional languages. Essentially, in this case removing a language from the input can be equivalent to removing 30% of the input or more. Even the 50% reduced case (brown) still performs better than using just one language (Spanish only).",
                "cite_spans": [
                    {
                        "start": 600,
                        "end": 601,
                        "text": "9",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 340,
                        "end": 348,
                        "text": "Figure 3",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Parallel Languages vs Input Reduction",
                "sec_num": "5.1"
            },
            {
                "text": "The extreme fall-off between the 50% reduction and the 70% reduction suggests that there is some point beyond which even multiple languages cannot compensate for the lack of data points. Exactly where this fall-off point lies will likely fluctuate depending on the data set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Languages vs Input Reduction",
                "sec_num": "5.1"
            },
            {
                "text": "Chen (2018) shows that neural machine translation tasks can be greatly improved by adding glossing data to the input mix (we will gloss over the technical details of the implementation here). While there is no direct equivalent to the gloss-sentence relationship for words, there is a close analog: phonetic transcriptions. Orthography may be conservative and often misleading, but phonetic representations are not. Meloni et al. (2019) use a phonetic dataset in their experiment, but they map from phonetic representations to phonetic representations, so both their input and their target items are represented in IPA. This performs worse than the orthographic task. An interesting further experiment would be to blend orthographic and phonetic representations in the input, in the style of Chen (2018), mapping them to an orthographic output. This would be a close analog to the sentence-gloss to sentence mapping that Chen (2018) reports success with.",
                "cite_spans": [
                    {
                        "start": 419,
                        "end": 439,
                        "text": "Meloni et al. (2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Potential Improvements",
                "sec_num": "5.2"
            },
            {
                "text": "One thing to consider is that this may not be ideal for the use-case. Phonetic datasets are not easy to produce, and orthographies are often more readily available. So while this blend might improve performance, needing a phonetic as well as an orthographic dataset would likely raise the threshold of reproducibility for interested parties.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Potential Improvements",
                "sec_num": "5.2"
            },
            {
                "text": "There are some important aspects of this kind of approach that linguists, or community members who are interested in utilizing it for their purposes, should be aware of. There are certain things that this type of approach can and cannot do for a community or project. The model does not so much reconstruct a word for the community as propose what the word could be, according to the data it has been fed. The model makes these recommendations on the basis of an abstract notion of the historic phonological and morphological differences between the input languages and the target language. This does not necessarily mean that the model learns or understands the historical phonological and morphological processes that separate the input sister languages from the target language. It has simply learned a way to generalize from the input to the output with some degree of accuracy. What is learned need not overlap with what linguists believe to have happened.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Warning Labels for Interested Linguists",
                "sec_num": "6"
            },
            {
                "text": "Therefore, this type of model will only ever generate cognates of the input. It cannot generate novel items. This is an important factor to consider for any community or linguist planning on using this approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Warning Labels for Interested Linguists",
                "sec_num": "6"
            },
            {
                "text": "Consider the following case: imagine we are trying to use this approach to reconstruct English from other Germanic languages. A large part of the English lexicon is not of Germanic ancestry. However, for any word we would try to reconstruct using this trained model, it would give us an approximation of a Germanic-derived lexeme. This is a potentially undesirable effect of the way the model was trained. Linguists and interested community members need to be aware of this and implement their own quality control.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Warning Labels for Interested Linguists",
                "sec_num": "6"
            },
            {
                "text": "However, this approach can potentially be useful for any language project where a community and/or linguists are working with an incomplete lexicon for a language. The prerequisite for this being a useful tool in such a scenario is that the sister languages of the target language are reasonably well documented and have at least dictionaries available from which data can be extracted. A final prerequisite is the presence of at least a small dictionary of the target language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Warning Labels for Interested Linguists",
                "sec_num": "6"
            },
            {
                "text": "The model would then be trained using the sister languages as input, and the target language list as a target output. After training confirms a reasonable accuracy, the model can then be fed with other known words in the sister language to get a prediction of those words in the target language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Warning Labels for Interested Linguists",
                "sec_num": "6"
            },
            {
                "text": "After producing said output, the linguist or language community needs to subject the output to quality control and decide on a series of questions: Do the output patterns match what we know of the target language? Can we assume that these words are cognates in the target language, or is there some evidence that other forms were present? Finally, if this is used by a community to fill in empty patterns in their language, the community needs to decide whether the output is something that the community wants in their language. The algorithm is not infallible, and it only makes proposals. Ultimately, a language community using this tool must decide whether to accept or reject the algorithm's recommendations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Warning Labels for Interested Linguists",
                "sec_num": "6"
            },
            {
                "text": "In this paper, we have shown that NMT frameworks can be used to predict cognates of a target language from cognates of its sister languages. We have further shown that adding or removing input languages has interesting effects on the accuracy of the model. This indicates that we can use additional sister languages to compensate for a lack of data in a given situation, though, as demonstrated in the case of Romanian, we cannot blindly add sister languages, nor assume that all additions are equally useful. This may be a promising method for situations where little data is available but there are multiple well-documented languages related to the target language. The next step for this line of research is to move from a proof of concept to an implementation in an actual language revitalization scenario. This is something we are currently working on. A further question that needs to be addressed is how well this approach performs with languages that exhibit a different morphology from the Romance languages, such as agglutinative and polysynthetic languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "7"
            },
            {
                "text": "All code and data used for this project are open-source and can be found here, in order to reproduce these results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "7"
            },
            {
                "text": "Something we would like to address in these final paragraphs is that machine learning is a tool. Like every tool, it has its uses and cases where it is not useful. The decision to use such a tool to expand the lexicon of a language lies with the language community, not with a linguist.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "7"
            },
            {
                "text": "We also acknowledge that tree representations are not necessarily the most accurate way to represent these relationships (Kalyan and Fran\u00e7ois, 2019).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This was done by simply removing every n-th line, depending on how much reduction was needed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
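The every-n-th-line reduction can be sketched as follows. The helper name is ours; note that removing every n-th line covers reductions up to 50%, while larger reductions (e.g. 70%) would instead keep only every n-th line.

```python
def remove_every_nth(lines, n):
    # Drop every n-th line (1-indexed); n=10 removes roughly 10% of the data.
    return [line for i, line in enumerate(lines, start=1) if i % n != 0]

len(remove_every_nth(list(range(100)), 10))  # 90
```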
            {
                "text": "One step of training means that the algorithm has gone through one batch of input lines. The default batch size for OpenNMT is 64. 6 The Spanish-only model was trained for only 5000 steps, as it plateaus around 1000 steps. Its performance was measured every 500 steps for Figure 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This can give a better representation of the performance, because a neural net constantly adjusts its weights, so looking at just one point in time can be deceiving.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "These were trained on a 2.2 GHz i5-5200 CPU, and training took anywhere between 4 and 7 hours for 10,000 steps. 9 Since in Figure 2 we observe that Span-Port-Ro and Span-Fre perform quite similarly, and Span-Fre-Port performs similarly to Span-Fre-Port-Ro, we remove Span-Fre and Span-Fre-Port from this graph to make it easier to read.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The author would like to thank the anonymous reviewers for their comments and give special thanks to Becky Sharp for helping with last minute edits.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Improving Neural Net Machine Translation Systems with Linguistic Information",
                "authors": [
                    {
                        "first": "Yuan",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    },
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yuan Lu Chen. 2018. Improving Neural Net Ma- chine Translation Systems with Linguistic Informa- tion. Phd thesis, University of Arizona.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "On the properties of neural machine translation: Encoder-decoder approaches",
                "authors": [
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    },
                    {
                        "first": "Bart",
                        "middle": [],
                        "last": "Van Merrienboer",
                        "suffix": ""
                    },
                    {
                        "first": "Dzmitry",
                        "middle": [],
                        "last": "Bahdanau",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "KyungHyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. CoRR, abs/1409.1259.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Automatic detection of cognates using orthographic alignment",
                "authors": [
                    {
                        "first": "Alina",
                        "middle": [
                            "Maria"
                        ],
                        "last": "Ciobanu",
                        "suffix": ""
                    },
                    {
                        "first": "Liviu",
                        "middle": [
                            "P"
                        ],
                        "last": "Dinu",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "99--105",
                "other_ids": {
                    "DOI": [
                        "10.3115/v1/P14-2017"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Alina Maria Ciobanu and Liviu P. Dinu. 2014. Auto- matic detection of cognates using orthographic align- ment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 99-105, Baltimore, Maryland. Association for Computational Linguis- tics.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Ab initio: Automatic Latin proto-word reconstruction",
                "authors": [
                    {
                        "first": "Alina",
                        "middle": [
                            "Maria"
                        ],
                        "last": "Ciobanu",
                        "suffix": ""
                    },
                    {
                        "first": "Liviu",
                        "middle": [
                            "P"
                        ],
                        "last": "Dinu",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 27th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1604--1614",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alina Maria Ciobanu and Liviu P. Dinu. 2018. Ab ini- tio: Automatic Latin proto-word reconstruction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1604-1614, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Digital documentation training for long island algonquian community language researchers: a new paradigm for community linguistics",
                "authors": [
                    {
                        "first": "Leighton",
                        "middle": [],
                        "last": "Delgado",
                        "suffix": ""
                    },
                    {
                        "first": "Irene",
                        "middle": [],
                        "last": "Navas",
                        "suffix": ""
                    },
                    {
                        "first": "Conor",
                        "middle": [],
                        "last": "Quinn",
                        "suffix": ""
                    },
                    {
                        "first": "Tina",
                        "middle": [],
                        "last": "Tarrant",
                        "suffix": ""
                    },
                    {
                        "first": "Wunetu",
                        "middle": [],
                        "last": "Tarrant",
                        "suffix": ""
                    },
                    {
                        "first": "Harry",
                        "middle": [],
                        "last": "Wallace",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Presented at: 51st Algonquian Conference",
                "volume": "",
                "issue": "",
                "pages": "24--27",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Leighton Delgado, Irene Navas, Conor Quinn, Tina Tarrant, Wunetu Tarrant, and Harry Wallace. 2019. Digital documentation training for long island al- gonquian community language researchers: a new paradigm for community linguistics. Presented at: 51st Algonquian Conference, McGill University, Montr\u00e9al, QC, 24-27 October.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Martin Haspelmath",
                "authors": [
                    {
                        "first": "Harald",
                        "middle": [],
                        "last": "Hammarstr\u00f6m",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [],
                        "last": "Forkel",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.5281/zenodo.4061162"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Harald Hammarstr\u00f6m, Robert Forkel, Martin Haspel- math, and Sebastian Bank. 2020. Glottolog 4.3. Jena.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "What makes word-level neural machine translation hard: A case study on English-German translation",
                "authors": [
                    {
                        "first": "Fabian",
                        "middle": [],
                        "last": "Hirschmann",
                        "suffix": ""
                    },
                    {
                        "first": "Jinseok",
                        "middle": [],
                        "last": "Nam",
                        "suffix": ""
                    },
                    {
                        "first": "Johannes",
                        "middle": [],
                        "last": "F\u00fcrnkranz",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
                "volume": "",
                "issue": "",
                "pages": "3199--3208",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fabian Hirschmann, Jinseok Nam, and Johannes F\u00fcrnkranz. 2016. What makes word-level neural machine translation hard: A case study on English- German translation. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 3199- 3208, Osaka, Japan. The COLING 2016 Organizing Committee.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Long short-term memory",
                "authors": [
                    {
                        "first": "Sepp",
                        "middle": [],
                        "last": "Hochreiter",
                        "suffix": ""
                    },
                    {
                        "first": "J\u00fcrgen",
                        "middle": [],
                        "last": "Schmidhuber",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Neural Computation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Freeing the comparative method from the tree model: A framework for historical glottometry. In Let's talk about trees: Genetic relationships of languages and their phylogenetic representation",
                "authors": [
                    {
                        "first": "Siva",
                        "middle": [],
                        "last": "Kaylan",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandre",
                        "middle": [],
                        "last": "Fran\u00e7ois",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Siva Kaylan and Alexandre Fran\u00e7ois. 2019. Freeing the comparative method from the tree model: A framework for historical glottometry. In Let's talk about trees: Genetic relationships of languages and their phylogenetic representation. Cambridge Uni- versity Press, online.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Opennmt: Open-source toolkit for neural machine translation",
                "authors": [
                    {
                        "first": "Guillaume",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    },
                    {
                        "first": "Yoon",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Yuntian",
                        "middle": [],
                        "last": "Deng",
                        "suffix": ""
                    },
                    {
                        "first": "Jean",
                        "middle": [],
                        "last": "Senellart",
                        "suffix": ""
                    },
                    {
                        "first": "Alexander",
                        "middle": [
                            "M"
                        ],
                        "last": "Rush",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proc. ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P17-4012"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017a. Opennmt: Open-source toolkit for neural machine translation. In Proc. ACL.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Handbook of Comparative and Historical Indo-European Linguistics : An International Handbook",
                "authors": [
                    {
                        "first": "Jared",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    },
                    {
                        "first": "Brian",
                        "middle": [],
                        "last": "Joseph",
                        "suffix": ""
                    },
                    {
                        "first": "Matthias",
                        "middle": [],
                        "last": "Fritz",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jared Klein, Brian Joseph, and Matthias Fritz. 2017b. Handbook of Comparative and Historical Indo- European Linguistics : An International Handbook. De Gruyter, Berlin.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Miami Language Reclamation in the Home: A Case Study",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Wesley",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Leonhard",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wesley Y. Leonhard. 2007. Miami Language Reclama- tion in the Home: A Case Study. Phd thesis, Univer- sity of California, Berkeley.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Using sequence similarity networks to identify partial cognates in multilingual wordlists",
                "authors": [
                    {
                        "first": "Johann-Mattis",
                        "middle": [],
                        "last": "List",
                        "suffix": ""
                    },
                    {
                        "first": "Philippe",
                        "middle": [],
                        "last": "Lopez",
                        "suffix": ""
                    },
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Bapteste",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
                "volume": "2",
                "issue": "",
                "pages": "599--605",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P16-2097"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Johann-Mattis List, Philippe Lopez, and Eric Bapteste. 2016. Using sequence similarity networks to iden- tify partial cognates in multilingual wordlists. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 599-605.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "The Cambridge History of the Romance Languages",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Maiden",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Smith",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Ledgeway",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "2",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Maiden, J. Smith, and A. Ledgeway, editors. 2013. The Cambridge History of the Romance Languages, volume 2. Cambridge University Press, Cambridge.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Reclaiming indigenous languages: A reconsideration of the roles and responsibilities of schools",
                "authors": [
                    {
                        "first": "Teresa",
                        "middle": [
                            "L"
                        ],
                        "last": "Mccarty",
                        "suffix": ""
                    },
                    {
                        "first": "Sheilah",
                        "middle": [
                            "E"
                        ],
                        "last": "Nicholas",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Teresa L. McCarty and Sheilah E. Nicholas. 2014. Re- claiming indigenous languages: A reconsideration of the roles and responsibilities of schools. Review of Research in Education, 31.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "The comparative method in historical linguistics",
                "authors": [
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Meillet",
                        "suffix": ""
                    }
                ],
                "year": 1967,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Antoine Meillet. 1967. The comparative method in historical linguistics. Librairie Honor\u00e9 Champion, Paris.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Ab antiquo: Proto-language reconstruction with rnns",
                "authors": [
                    {
                        "first": "Carlo",
                        "middle": [],
                        "last": "Meloni",
                        "suffix": ""
                    },
                    {
                        "first": "Shauli",
                        "middle": [],
                        "last": "Ravfogel",
                        "suffix": ""
                    },
                    {
                        "first": "Yoav",
                        "middle": [],
                        "last": "Goldberg",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carlo Meloni, Shauli Ravfogel, and Yoav Goldberg. 2019. Ab antiquo: Proto-language reconstruction with rnns.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Neural machine translation and sequence-to-sequence models: A tutorial",
                "authors": [
                    {
                        "first": "Graham",
                        "middle": [],
                        "last": "Neubig",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Graham Neubig. 2017. Neural machine translation and sequence-to-sequence models: A tutorial. CoRR, abs/1703.01619.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Bleu: a method for automatic evaluation of machine translation",
                "authors": [
                    {
                        "first": "Kishore",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    },
                    {
                        "first": "Salim",
                        "middle": [],
                        "last": "Roukos",
                        "suffix": ""
                    },
                    {
                        "first": "Todd",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "Wei-Jing",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {
                    "DOI": [
                        "10.3115/1073083.1073135"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics? CoRR",
                "authors": [
                    {
                        "first": "Taraka",
                        "middle": [],
                        "last": "Rama",
                        "suffix": ""
                    },
                    {
                        "first": "Johann-Mattis",
                        "middle": [],
                        "last": "List",
                        "suffix": ""
                    },
                    {
                        "first": "Johannes",
                        "middle": [],
                        "last": "Wahle",
                        "suffix": ""
                    },
                    {
                        "first": "Gerhard",
                        "middle": [],
                        "last": "J\u00e4ger",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard J\u00e4ger. 2018. Are automatic meth- ods for cognate detection good enough for phyloge- netic reconstruction in historical linguistics? CoRR, abs/1804.05416.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "An abridged family tree of the relevant Romance languages. Adapted from glottolog(Hammarstr\u00f6m et al., 2020)."
            },
            "FIGREF1": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Performance at different training steps for models with different combinations of input languages, plotted by custom score. All scores are calculated from the testing data."
            },
            "FIGREF2": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Performance of models trained on all four languages, but with varying levels of downsampled data. Included for comparison are models trained with all data on different language combinations. Plotted is the custom score over steps. Scores are calculated every 1000 training steps. All models were run on OpenNMTpy default parameters."
            },
            "TABREF1": {
                "type_str": "table",
                "text": "Examples of data patterns, including types of data removed during cleanup (e.g., rows 1 and 3).",
                "num": null,
                "html": null,
                "content": "<table/>"
            },
            "TABREF3": {
                "type_str": "table",
                "text": "63% 57.74% 69.6% 80.33% 88.42% 0.82 Span-Fre-Port 42.68% 53.27% 68.34% 77.68% 84.94% 0.78 Span-Port-Ro 42.54% 53.28% 66.39% 74.76% 81.59% 0.75",
                "num": null,
                "html": null,
                "content": "<table><tr><td>Edit Distance</td><td>0</td><td>\u22641</td><td>\u2264 2</td><td>\u22643</td><td>\u22644</td><td>score</td></tr><tr><td colspan=\"2\">Span-Fre-Port-Ro 44.Span-Fre 39.9%</td><td colspan=\"4\">50.9% 63.88% 74.62% 83.4%</td><td>0.76</td></tr><tr><td>Spanish only</td><td colspan=\"6\">35.6% 47.98% 60.25% 69.03% 74.76% 0.68</td></tr><tr><td colspan=\"6\">10% Reduced Input 40.17% 54.25% 69.6% 81.31% 87.59%</td><td>0.8</td></tr><tr><td colspan=\"7\">30% Reduced Input 39.75% 50.91% 66.11% 73.36% 83.12% 0.77</td></tr><tr><td colspan=\"6\">50% Reduced Input 33.19% 45.61% 60.95% 71.27% 82.4%</td><td>0.75</td></tr><tr><td colspan=\"3\">70% Reduced Input 17.02% 26.08%</td><td>41%</td><td colspan=\"3\">50.77% 65.97% 0.59</td></tr></table>"
            },
            "TABREF4": {
                "type_str": "table",
                "text": "Edit distance percentiles at 10,000 training steps. Shown are the results from using all data points with different combinations of languages (top), as well as using all languages but with random downsampling of the data from each (bottom). All scores are calculated from the testing data.",
                "num": null,
                "html": null,
                "content": "<table/>"
            }
        }
    }
}