{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T01:13:20.754375Z"
    },
    "title": "The REPUcs' Spanish-Quechua Submission to the AmericasNLP 2021 Shared Task on Open Machine Translation",
    "authors": [
        {
            "first": "Oscar",
            "middle": [],
            "last": "Moreno Veliz",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Pontificia Universidad Cat\u00f3lica del Per\u00fa",
                "location": {}
            },
            "email": "omoreno@pucp.edu.pe"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present the submission of REPUcs 1 to the AmericasNLP machine translation shared task for the low resource language pair Spanish-Quechua. Our neural machine translation system ranked first in Track two (development set not used for training) and third in Track one (training includes development data). Our contribution is focused on: (i) the collection of new parallel data from different web sources (poems, lyrics, lexicons, handbooks), and (ii) using large Spanish-English data for pre-training and then fine-tuning the Spanish-Quechua system. This paper describes the new parallel corpora and our approach in detail.",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present the submission of REPUcs 1 to the AmericasNLP machine translation shared task for the low resource language pair Spanish-Quechua. Our neural machine translation system ranked first in Track two (development set not used for training) and third in Track one (training includes development data). Our contribution is focused on: (i) the collection of new parallel data from different web sources (poems, lyrics, lexicons, handbooks), and (ii) using large Spanish-English data for pre-training and then fine-tuning the Spanish-Quechua system. This paper describes the new parallel corpora and our approach in detail.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "REPUcs participated in the AmericasNLP 2021 machine translation shared task (Mager et al., 2021) for the Spanish-Quechua language pair. Quechua is one of the most spoken languages in South America (Simons and Fenning, 2019) , with several variants, and for this competition, the target language is Southern Quechua. A disadvantage of working with indigenous languages is that there are few documents per language from which to extract parallel or even monolingual corpora. Additionally, most of these languages are traditionally oral, which is the case of Quechua. In order to compensate the lack of data we first obtain a collection of new parallel corpora to augment the available data for the shared task. In addition, we propose to use transfer learning (Zoph et al., 2016) using large Spanish-English data in a neural machine translation (NMT) model. To boost the performance of our transfer learning approach, we follow the work of Kocmi and Bojar (2018) , which demonstrated that sharing the source language and a vocabulary of subword 1 \"Research Experience for Peruvian Undergraduates -Computer Science\" is a program that connects Peruvian students with researchers worldwide. The author was part of the 2021 cohort: https://www.repuprogram.org/repu-cs. units can improve the performance of low resource languages.",
                "cite_spans": [
                    {
                        "start": 76,
                        "end": 96,
                        "text": "(Mager et al., 2021)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 197,
                        "end": 223,
                        "text": "(Simons and Fenning, 2019)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 758,
                        "end": 777,
                        "text": "(Zoph et al., 2016)",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 938,
                        "end": 960,
                        "text": "Kocmi and Bojar (2018)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Quechua is the most widespread language family in South America, with more than 6 millions speakers and several variants. For the AmericasNLP Shared Task, the development and test sets were prepared using the Standard Southern Quechua writing system, which is based on the Quechua Ayacucho (quy) variant (for simplification, we will refer to it as Quechua for the rest of the paper). This is an official language in Peru, and according to Zariquiey et al. (2019) it is labelled as endangered. Quechua is essentially a spoken language so there is a lack of written materials. Moreover, it is a polysynthetic language, meaning that it usually express large amount of information using several morphemes in a single word. Hence, subword segmentation methods will have to minimise the problem of addressing \"rare words\" for an NMT system.",
                "cite_spans": [
                    {
                        "start": 439,
                        "end": 462,
                        "text": "Zariquiey et al. (2019)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Spanish\u2192Quechua",
                "sec_num": "2"
            },
            {
                "text": "To the best of our knowledge, Ortega et al. (2020b) is one of the few studies that employed a sequence-to-sequence NMT model for Southern Quechua, and they focused on transfer learning with Finnish, an agglutinative language similar to Quechua. Likewise, Huarcaya Taquiri (2020) used the Jehovah Witnesses dataset (Agi\u0107 and Vuli\u0107, 2019) , together with additional lexicon data, to train an NMT model that reached up to 39 BLEU points on Quechua. However, the results in both cases were high because the development and test set are split from the same distribution (domain) as the training set. On the other hand, Ortega and Pillaipakkamnatt (2018) improved alignments for Quechua by using Finnish(an agglutinative language) as the pivot language. The corpus source is the parallel treebank of Rios et al. (Rios et al., 2012) ., so we deduce that they worked with Quechua Cuzco (quz). (Ortega et al., 2020a) In the AmericasNLP shared task, new out-of-domain evaluation sets were released, and there were two tracks: using or not the validation set for training the final submission. We addressed both tracks by collecting more data and pre-training the NMT model with large Spanish-English data.",
                "cite_spans": [
                    {
                        "start": 314,
                        "end": 336,
                        "text": "(Agi\u0107 and Vuli\u0107, 2019)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 806,
                        "end": 825,
                        "text": "(Rios et al., 2012)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 885,
                        "end": 907,
                        "text": "(Ortega et al., 2020a)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Spanish\u2192Quechua",
                "sec_num": "2"
            },
            {
                "text": "3 Data and pre-processing",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Spanish\u2192Quechua",
                "sec_num": "2"
            },
            {
                "text": "In this competition we are going to use the Ameri-casNLP Shared Task datasets and new corpora extracted from documents and websites in Quechua.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Spanish\u2192Quechua",
                "sec_num": "2"
            },
            {
                "text": "For training, the available parallel data comes from dictionaries and Jehovah Witnesses dataset (JW300; Agi\u0107 and Vuli\u0107, 2019) . AmericasNLP also released parallel corpus aligned with English (en) and the close variant of Quechua Cusco (quz) to enhance multilingual learning. For validation, there is a development set made with 994 sentences from Spanish and Quechua (quy) (Ebrahimi et al., 2021) . Detailed information from all the available datasets with their corresponding languages is as follows:",
                "cite_spans": [
                    {
                        "start": 104,
                        "end": 125,
                        "text": "Agi\u0107 and Vuli\u0107, 2019)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 373,
                        "end": 396,
                        "text": "(Ebrahimi et al., 2021)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "AmericasNLP datasets",
                "sec_num": "3.1"
            },
            {
                "text": "\u2022 JW300 (quy, quz, en): texts from the religious domain available in OPUS (Tiedemann, 2012) . JW300 has 121k sentences. The problems with this dataset are misaligned sentences, misspelled words and blank translations. \u2022 MINEDU (quy): Sentences extracted from the official dictionary of the Ministry of Education in Peru (MINEDU). This dataset contains open-domain short sentences. A considerable number of sentences are related to the countryside. It only has 650 sentences. \u2022 Dict_misc (quy): Dictionary entries and samples collected and reviewed by Huarcaya Taquiri (2020). This dataset is made from 9k sentences, phrases and word translations. Furthermore, to examine the domain resemblance, it is important to analyse the similarity between the training and development. Table 1 shows the percentage of the development set tokens that overlap with the tokens in the training datasets on Spanish (es) and Quechua (quy) after deleting all types of symbols.",
                "cite_spans": [
                    {
                        "start": 74,
                        "end": 91,
                        "text": "(Tiedemann, 2012)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 775,
                        "end": 782,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "AmericasNLP datasets",
                "sec_num": "3.1"
            },
            {
                "text": "We observe from Table 1 that the domain of the training and development set are different as the overlapping in Quechua does not even go above 50%. There are two approaches to address this Dataset % Dev overlapping es quy JW300 85% 45% MINEDU 15% 5% Dict_misc 40% 18% Table 1 : Word overlapping ratio between the development and the available training sets in AmericasNLP problem: to add part of the development set into the training or to obtain additional data from the same or a more similar domain. In this paper, we focus on the second approach.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 16,
                        "end": 23,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 268,
                        "end": 275,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "AmericasNLP datasets",
                "sec_num": "3.1"
            },
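As a rough illustration of how the overlap figures in Table 1 can be reproduced, the following Python sketch computes the fraction of development-set tokens that also appear in a training corpus after stripping symbols. This is not the authors' original script, and the file names (dev.quy, jw300.quy, etc.) are hypothetical.

import re

def token_set(path):
    # Lowercase, strip punctuation/symbols, and return the set of whitespace tokens.
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    text = re.sub(r"[^\w\s]", " ", text, flags=re.UNICODE)
    return set(text.split())

def dev_overlap(dev_path, train_path):
    # Fraction of distinct development-set tokens also seen in the training corpus.
    dev, train = token_set(dev_path), token_set(train_path)
    return len(dev & train) / len(dev)

for name in ("jw300", "minedu", "dict_misc"):
    print(name, f"{dev_overlap('dev.quy', name + '.quy'):.1%}")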
            {
                "text": "Sources of Quechua documents Even though Quechua is an official language in Peru, official government websites are not translated to Quechua or any other indigenous language, so it is not possible to perform web scrapping (Bustamante et al., 2020) . However, the Peruvian Government has published handbooks and lexicons for Quechua Ayacucho and Quechua Cusco, plus other educational resources to support language learning in indigenous communities. In addition, there are official documents such as the Political Constitution of Peru and the Regulation of the Amazon Parliament that are translated to the Quechua Cusco variant. We have found three unofficial sources to extract parallel corpora from Quechua Ayacucho (quy). The first one is a website, made by Maximiliano Duran (Duran, 2010) , that encourages the learning of Quechua Ayacucho. The site contains poems, stories, riddles, songs, phrases and a vocabulary for Quechua. The second one is a website for different lyrics of poems and songs which have available translations for both variants of Quechua (Lyrics translate, 2008). The third source is a Quechua handbook for the Quechua Ayacucho variant elaborated by Iter and C\u00e1rdenas (2019) .",
                "cite_spans": [
                    {
                        "start": 222,
                        "end": 247,
                        "text": "(Bustamante et al., 2020)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 778,
                        "end": 791,
                        "text": "(Duran, 2010)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 1175,
                        "end": 1199,
                        "text": "Iter and C\u00e1rdenas (2019)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New parallel corpora",
                "sec_num": "3.2"
            },
            {
                "text": "Sources that were extracted but not used due to time constrains were the Political Constitution of Peru and the Regulation of the Amazon Parliament. Other non-extracted source is a dictionary for Quechua Ayacucho from a website called InkaTour 2 . This source was not used because we already had a dictionary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New parallel corpora",
                "sec_num": "3.2"
            },
            {
                "text": "Methodology for corpus creation The available vocabulary in Duran (2010) was extracted manually and transformed into parallel corpora using the first pair of parenthesis as separators. We will call this dataset \"Lexicon\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New parallel corpora",
                "sec_num": "3.2"
            },
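A minimal sketch of the Lexicon extraction described above, assuming each vocabulary line looks like "entry (gloss) ..." and that the first pair of parentheses separates the two languages; the input and output file names are hypothetical.

def split_entry(line):
    # Use the first pair of parentheses as the separator between the two sides.
    open_idx = line.find("(")
    close_idx = line.find(")", open_idx + 1)
    if open_idx == -1 or close_idx == -1:
        return None  # no usable gloss on this line
    return line[:open_idx].strip(), line[open_idx + 1:close_idx].strip()

with open("duran_vocabulary.txt", encoding="utf-8") as src, \
     open("lexicon.quy", "w", encoding="utf-8") as quy, \
     open("lexicon.es", "w", encoding="utf-8") as es:
    for line in src:
        pair = split_entry(line)
        if pair:
            quy.write(pair[0] + "\n")
            es.write(pair[1] + "\n")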
            {
                "text": "All the additional sentences in Duran (2010) and a few poems from (Lyrics translate, 2008) were manually aligned to obtain the Web Miscellaneous (WebMisc) corpus. Likewise, translations from the Quechua educational handbook (Iter and C\u00e1rdenas, 2019) were manually aligned to obtain a parallel corpus (Handbook). 3 In the case of the official documents for Quechua Cusco, there was a specific format were the Spanish text was followed by the Quechua translation. After manually arranging the line breaks to separate each translation pair, we automatically constructed a parallel corpus for both documents. Paragraphs with more than 2 sentences that had the same number of sentences as their translation were split into small sentences and the unmatched paragraphs were deleted.",
                "cite_spans": [
                    {
                        "start": 224,
                        "end": 249,
                        "text": "(Iter and C\u00e1rdenas, 2019)",
                        "ref_id": null
                    },
                    {
                        "start": 312,
                        "end": 313,
                        "text": "3",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New parallel corpora",
                "sec_num": "3.2"
            },
            {
                "text": "We perform a large number or rare events (LNRE) modelling to analyse the WebMisc, Lexicon and Handbook datasets 4 . The values are shown in Table 2 : Corpora description: S = #sentences in corpus; N = number of tokens; V = vocabulary size; V1 = number of tokens occurring once (hapax); V/N = vocabulary growth rate; V1/N = hapax growth rate; mean = word frequency mean",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 140,
                        "end": 147,
                        "text": "Table 2",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Corpora description",
                "sec_num": null
            },
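The descriptors reported in Table 2 can be computed with a few lines of Python. This is only a sketch of the quantities involved (one sentence per line is assumed, and the file name is hypothetical), not the LNRE calculator by Kyle Gorman that the authors used.

from collections import Counter

def corpus_description(path):
    counts, sentences = Counter(), 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            sentences += 1
            counts.update(line.lower().split())
    n = sum(counts.values())                        # N: number of tokens
    v = len(counts)                                 # V: vocabulary size
    v1 = sum(1 for c in counts.values() if c == 1)  # V1: hapax legomena
    return {"S": sentences, "N": n, "V": v, "V1": v1,
            "V/N": v / n, "V1/N": v1 / n, "mean": n / v}

print(corpus_description("webmisc.quy"))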
            {
                "text": "We notice that the vocabulary and hapax growth rate is similar for Quechua (quy) in WebMisc and Handbook even though the latter has more than twice the number of sentences. In addition, it was expected that the word frequency mean and the vocabulary size were lower for Quechua, as this demonstrates its agglutinative property. However, this does not happens in the Lexicon dataset, since is understandable as it is a dictionary that has one or two words for the translation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpora description",
                "sec_num": null
            },
            {
                "text": "Moreover, there is a high presence of tokens occurring only once in both languages. In other words, there is a possibility that our datasets have spelling errors or presence of foreign words (Nagata et al., 2018) . However, in this case this could be more related to the vast vocabulary, as the datasets are made of sentences from different domains (poems, songs, teaching, among others).",
                "cite_spans": [
                    {
                        "start": 191,
                        "end": 212,
                        "text": "(Nagata et al., 2018)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpora description",
                "sec_num": null
            },
            {
                "text": "Furthermore, it is important to examine the similarities between the new datasets and the development set. The percentage of the development set words that overlap with the words of the new datasets on Spanish (es) and Quechua (quy) after eliminating all symbols is shown in Table 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 275,
                        "end": 282,
                        "text": "Table 3",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Corpora description",
                "sec_num": null
            },
            {
                "text": "% Dev overlapping es quy WebMisc 18.6% 4% Lexicon 20% 3.4% Handbook 28%",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": null
            },
            {
                "text": "10.6% Although at first glance the analysis may show that there is not a significant similarity with the development set, we have to take into account that in Table 1 , JW300 has 121k sentences and Dict_misc is a dictionary, so it is easy to overlap some of the development set words at least once.However , in the case of WebMisc and Handbook datasets, the quantity of sentences are less than 3k per dataset and even so the percentage of overlapping in Spanish is quite good. This result goes according to the contents of the datasets, as they contain common phrases and open domain sentences, which are the type of sentences that the development set has.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 159,
                        "end": 166,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": null
            },
            {
                "text": "For pre-training, we used the EuroParl dataset for Spanish-English (1.9M sentences) (Koehn, 2005) and its development corpora for evaluation.",
                "cite_spans": [
                    {
                        "start": 84,
                        "end": 97,
                        "text": "(Koehn, 2005)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "English-Spanish dataset",
                "sec_num": "3.3"
            },
            {
                "text": "From the Europarl dataset, we extracted 3,000 sentences for validation. For testing we used the devel-opment set from the WMT2006 campaign (Koehn and Monz, 2006) .",
                "cite_spans": [
                    {
                        "start": 139,
                        "end": 161,
                        "text": "(Koehn and Monz, 2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach used 4.1 Evaluation",
                "sec_num": "4"
            },
            {
                "text": "In the case of Quechua, as the official development set contains only 1,000 sentences there was no split for the testing. Hence, validation results will be taken into account as testing ones.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach used 4.1 Evaluation",
                "sec_num": "4"
            },
            {
                "text": "The main metric in this competition is chrF (Popovi\u0107, 2017) which evaluates character n-grams and is a useful metric for agglutinative languages such as Quechua. We also reported the BLEU scores (Papineni et al., 2002) . We used the implementations of sacreBLEU (Post, 2018) .",
                "cite_spans": [
                    {
                        "start": 44,
                        "end": 59,
                        "text": "(Popovi\u0107, 2017)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 195,
                        "end": 218,
                        "text": "(Papineni et al., 2002)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 262,
                        "end": 274,
                        "text": "(Post, 2018)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach used 4.1 Evaluation",
                "sec_num": "4"
            },
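The scoring step can be reproduced with sacreBLEU's Python API roughly as follows; this is a minimal sketch, the file names are hypothetical, and the exact evaluation settings of the shared task are not specified here.

import sacrebleu

with open("hypotheses.quy", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("reference.quy", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])  # word-level metric
chrf = sacrebleu.corpus_chrf(hypotheses, [references])  # character n-gram metric (main metric)
print(f"BLEU = {bleu.score:.2f}  chrF = {chrf.score:.4f}")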
            {
                "text": "Subword segmentation is a crucial process for the translation of polysinthetic languages such as Quechua. We used the Byte-Pair-Encoding (BPE; Sennrich et al., 2016) implementation in Sentence-Piece (Kudo and Richardson, 2018) with a vocabulary size of 32,000. To generate a richer vocabulary, we trained a segmentation model with all three languages (Spanish, English and Quechua), where we upsampled the Quechua data to reach a uniform distribution.",
                "cite_spans": [
                    {
                        "start": 199,
                        "end": 226,
                        "text": "(Kudo and Richardson, 2018)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subword segmentation",
                "sec_num": "4.2"
            },
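A minimal sketch of the joint subword model described above: one BPE SentencePiece model with a 32,000-piece vocabulary trained on concatenated Spanish, English and Quechua text, with the Quechua side repeated to approximate a uniform distribution. The file names and the upsampling factor are assumptions, not values reported in the paper.

import sentencepiece as spm

UPSAMPLE_QUY = 20  # hypothetical repetition factor for the much smaller Quechua corpus
with open("spm_input.txt", "w", encoding="utf-8") as out:
    for path, repeats in (("train.es", 1), ("train.en", 1), ("train.quy", UPSAMPLE_QUY)):
        with open(path, encoding="utf-8") as f:
            text = f.read().rstrip("\n") + "\n"
        out.write(text * repeats)

spm.SentencePieceTrainer.train(
    input="spm_input.txt",
    model_prefix="es_en_quy_bpe",
    model_type="bpe",
    vocab_size=32000,
)
sp = spm.SentencePieceProcessor(model_file="es_en_quy_bpe.model")
print(sp.encode("un ejemplo para segmentar", out_type=str))  # subword pieces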
            {
                "text": "For all experiments, we used a Transformer-based model (Vaswani et al., 2017) with default parameters from the Fairseq toolkit (Ott et al., 2019) . The criteria for early stopping was cross-entropy loss for 15 steps.",
                "cite_spans": [
                    {
                        "start": 55,
                        "end": 77,
                        "text": "(Vaswani et al., 2017)",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 127,
                        "end": 145,
                        "text": "(Ott et al., 2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Procedure",
                "sec_num": "4.3"
            },
            {
                "text": "We first pre-trained a Spanish-English model on the Europarl dataset in order to obtain a good encoding capability on the Spanish side. Using this pre-trained model, we implemented two different versions for fine-tunning. First, with the JW300 dataset, which was the largest Spanish-Quechua corpus, and the second one with all the available datasets (including the ones that we obtained) for Quechua.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Procedure",
                "sec_num": "4.3"
            },
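The fine-tuning step could be launched with the fairseq CLI along the following lines. This is a hedged sketch, not the authors' exact command: the data paths, optimiser settings and checkpoint-reset flags are assumptions, while the architecture, criterion and patience follow the description above.

import subprocess

cmd = [
    "fairseq-train", "data-bin/es-quy",          # binarised Spanish-Quechua data (hypothetical path)
    "--arch", "transformer",
    "--optimizer", "adam", "--lr", "5e-4",       # assumed optimiser settings
    "--criterion", "cross_entropy",
    "--patience", "15",                          # stop after 15 validations without improvement
    "--restore-file", "checkpoints_es_en/checkpoint_best.pt",  # Spanish-English pre-trained model
    "--reset-optimizer", "--reset-dataloader", "--reset-meters",
    "--save-dir", "checkpoints_es_quy",
]
subprocess.run(cmd, check=True)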
            {
                "text": "The results from the transfer learning models and the baseline are shown in Table 4 . We observe that the best result on BLEU and chrF was obtained using the provided datasets together with the extracted datasets. This shows that the new corpora were helpful to improve translation performance.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 76,
                        "end": 83,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results and discussion",
                "sec_num": "5"
            },
            {
                "text": "From Table 4 , we observe that using transfer learning showed a considerable improvement in comparison with the baseline (+0.56 in BLEU and .007 in chrF). Moreover, using transfer learning with all the available datasets obtained the best BLEU and chrF score. Specially, it had a 0.012 increase in chrF which is quite important as chrF is the metric that best evaluates translation in this case. Overall, the results do not seem to be good in terms of BLEU. However, a manual analysis of the sentences shows that the model is learning to translate a considerable amount of affixes.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 5,
                        "end": 12,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results and discussion",
                "sec_num": "5"
            },
            {
                "text": "El control de armas probablemente no es popular en Texas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input (ES)",
                "sec_num": null
            },
            {
                "text": "Weapon control is probably not popular in Texas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input (EN)",
                "sec_num": null
            },
            {
                "text": "Texaspiqa sutillapas arma controlayqa manachusmi hinachu apakun Output",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reference (QUY)",
                "sec_num": null
            },
            {
                "text": "Texas llaqtapi armakuna controlayqa manam runakunapa runachu For instance, the subwords \"arma\", \"mana\", among others, have been correctly translated but are not grouped in the same words as in the reference. In addition, only the word \"controlayqa\" is translated correctly, which would explain the low results in BLEU. Decoding an agglutinative language is a very difficult task, and the low BLEU scores cannot suggest a translation with proper adequacy and/or fluency (as we can also observe this from the example). Nevertheless, BLEU works at word-level so other character-level metrics should be considered to inspect agglutinative languages. This would be the case of chrF (Popovi\u0107, 2017) were there is an increase of around 3% when using the AmericasNLP altogether with the new extracted corpora.",
                "cite_spans": [
                    {
                        "start": 677,
                        "end": 692,
                        "text": "(Popovi\u0107, 2017)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reference (QUY)",
                "sec_num": null
            },
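For the example above, sentence-level scores make the word-level versus character-level contrast concrete. The following sketch uses sacreBLEU's sentence-level functions; the numbers it prints are illustrative, not figures from the paper.

import sacrebleu

reference = "Texaspiqa sutillapas arma controlayqa manachusmi hinachu apakun"
hypothesis = "Texas llaqtapi armakuna controlayqa manam runakunapa runachu"

bleu = sacrebleu.sentence_bleu(hypothesis, [reference])
chrf = sacrebleu.sentence_chrf(hypothesis, [reference])
# Only "controlayqa" matches as a whole word, but shared character n-grams such as
# "arma" and "mana" still earn credit under chrF.
print(f"sentence BLEU = {bleu.score:.2f}, sentence chrF = {chrf.score:.2f}")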
            {
                "text": "Translations using the transfer learning model trained with all available Quechua datasets were submitted for track 2 (Development set not used for Training). For the submission of track 1 (Development set used for Training) we retrained the best transfer learning model adding the validation to the training for 40 epochs. The official results of the competition are shown in ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reference (QUY)",
                "sec_num": null
            },
            {
                "text": "In this paper, we focused on extracting new datasets for Spanish-Quechua, which helped to improve the performance of our model. Moreover, we found that using transfer learning was beneficial to the results even without the additional data. By combining the new corpora in the fine-tuning step, we managed to obtain the first place on Track 2 and the third place on Track 1 of the AmericasNLP Shared Task. Due to time constrains, the Quechua Cusco data was not used, but it can be beneficial for further work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "In general, we found that the translating Quechua is a challenging task for two reasons. Firstly, there is a lack of data for all the variants of Quechua, and the available documents are hard to extract. In this research, all the new datasets were extracted and aligned mostly manually. Secondly, the agglutinative nature of Quechua motivates more research about effective subword segmentation methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "https://www.inkatour.com/dico/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "All documents are published in: https://github.com/ Ceviche98/REPUcs-AmericasNLP20214 We used the LNRE calculator created by Kyle Gorman: https://gist.github.com/kylebgorman/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This work could not be possible without the support of REPU Computer Science (Research Experience for Peruvian Undergraduates), a program that connects Peruvian students with researchers across the world. The author is thankful to the REPU's directors and members, and in particular, to Fernando Alva-Manchego and David Freidenson, who were part of the early discussions for the participation in the Shared Task. Furthermore, the author is grateful to the insightful feedback of Arturo Oncevay, Barry Haddow and Alexandra Birch, from the University of Edinburgh, where the author worked as an intern as part of the REPU's 2021 cohort.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            },
            {
                "text": " Table 7 : Description of the corpora extracted, but not used, for Quechua Cusco (quz). S = #sentences in corpus; N = number of tokens; V = vocabulary size; V1 = number of tokens occurring once (hapax); V/N = vocabulary growth rate; V1/N = hapax growth rate; mean = word frequency mean",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1,
                        "end": 8,
                        "text": "Table 7",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "A Appendix",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "JW300: A widecoverage parallel corpus for low-resource languages",
                "authors": [
                    {
                        "first": "\u017deljko",
                        "middle": [],
                        "last": "Agi\u0107",
                        "suffix": ""
                    },
                    {
                        "first": "Ivan",
                        "middle": [],
                        "last": "Vuli\u0107",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "3204--3210",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P19-1310"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru",
                "authors": [
                    {
                        "first": "Gina",
                        "middle": [],
                        "last": "Bustamante",
                        "suffix": ""
                    },
                    {
                        "first": "Arturo",
                        "middle": [],
                        "last": "Oncevay",
                        "suffix": ""
                    },
                    {
                        "first": "Roberto",
                        "middle": [],
                        "last": "Zariquiey",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
                "volume": "",
                "issue": "",
                "pages": "2914--2923",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gina Bustamante, Arturo Oncevay, and Roberto Zariquiey. 2020. No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 2914-2923, Marseille, France. European Language Resources Association.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Lengua general de los Incas",
                "authors": [
                    {
                        "first": "Maximiliano",
                        "middle": [],
                        "last": "Duran",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "2021--2024",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maximiliano Duran. 2010. Lengua general de los In- cas. http://quechua-ayacucho.org/es/index_es.php. Accessed: 2021-03-15.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models",
                "authors": [
                    {
                        "first": "Abteen",
                        "middle": [],
                        "last": "Ebrahimi",
                        "suffix": ""
                    },
                    {
                        "first": "Manuel",
                        "middle": [],
                        "last": "Mager",
                        "suffix": ""
                    },
                    {
                        "first": "Arturo",
                        "middle": [],
                        "last": "Oncevay",
                        "suffix": ""
                    },
                    {
                        "first": "Vishrav",
                        "middle": [],
                        "last": "Chaudhary",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [],
                        "last": "Chiruzzo",
                        "suffix": ""
                    },
                    {
                        "first": "Angela",
                        "middle": [],
                        "last": "Fan",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Ortega",
                        "suffix": ""
                    },
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Ramos",
                        "suffix": ""
                    },
                    {
                        "first": "Annette",
                        "middle": [],
                        "last": "Rios",
                        "suffix": ""
                    },
                    {
                        "first": "Ivan",
                        "middle": [],
                        "last": "Vladimir",
                        "suffix": ""
                    },
                    {
                        "first": "Gustavo",
                        "middle": [
                            "A"
                        ],
                        "last": "Gim\u00e9nez-Lugo",
                        "suffix": ""
                    },
                    {
                        "first": "Elisabeth",
                        "middle": [],
                        "last": "Mager",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "Ngoc Thang Vu, and Katharina Kann. 2021",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. Americasnli: Evaluating zero-shot nat- ural language understanding of pretrained multilin- gual models in truly low-resource languages.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Traducci\u00f3n autom\u00e1tica neuronal para lengua nativa peruana. Bachelor's thesis",
                "authors": [
                    {
                        "first": "Diego",
                        "middle": [],
                        "last": "Huarcaya",
                        "suffix": ""
                    },
                    {
                        "first": "Taquiri",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diego Huarcaya Taquiri. 2020. Traducci\u00f3n autom\u00e1tica neuronal para lengua nativa peruana. Bachelor's the- sis, Universidad Peruana Uni\u00f3n.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Trivial transfer learning for low-resource neural machine translation",
                "authors": [
                    {
                        "first": "Tom",
                        "middle": [],
                        "last": "Kocmi",
                        "suffix": ""
                    },
                    {
                        "first": "Ond\u0159ej",
                        "middle": [],
                        "last": "Bojar",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
                "volume": "",
                "issue": "",
                "pages": "244--252",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244-252, Bel- gium, Brussels. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Europarl: A parallel corpus for statistical machine translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "MT summit",
                "volume": "5",
                "issue": "",
                "pages": "79--86",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86. Citeseer.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Manual and automatic evaluation of machine translation between european languages",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Christof",
                        "middle": [],
                        "last": "Monz",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings on the Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "102--121",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between european languages. In Proceedings on the Work- shop on Statistical Machine Translation, pages 102- 121, New York City. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
                "authors": [
                    {
                        "first": "Taku",
                        "middle": [],
                        "last": "Kudo",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Richardson",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
                "volume": "",
                "issue": "",
                "pages": "66--71",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/D18-2012"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Lyrics translate. https:// lyricstranslate",
                "authors": [],
                "year": 2008,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "2021--2024",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lyrics translate. 2008. Lyrics translate. https:// lyricstranslate.com/. Accessed: 2021-03-15.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas",
                "authors": [
                    {
                        "first": "Manuel",
                        "middle": [],
                        "last": "Mager",
                        "suffix": ""
                    },
                    {
                        "first": "Arturo",
                        "middle": [],
                        "last": "Oncevay",
                        "suffix": ""
                    },
                    {
                        "first": "Abteen",
                        "middle": [],
                        "last": "Ebrahimi",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Ortega",
                        "suffix": ""
                    },
                    {
                        "first": "Annette",
                        "middle": [],
                        "last": "Rios",
                        "suffix": ""
                    },
                    {
                        "first": "Angela",
                        "middle": [],
                        "last": "Fan",
                        "suffix": ""
                    },
                    {
                        "first": "Ximena",
                        "middle": [],
                        "last": "Gutierrez-Vasques",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [],
                        "last": "Chiruzzo",
                        "suffix": ""
                    },
                    {
                        "first": "Gustavo",
                        "middle": [],
                        "last": "Gim\u00e9nez-Lugo",
                        "suffix": ""
                    },
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Ramos",
                        "suffix": ""
                    },
                    {
                        "first": "Anna",
                        "middle": [],
                        "last": "Currey",
                        "suffix": ""
                    },
                    {
                        "first": "Vishrav",
                        "middle": [],
                        "last": "Chaudhary",
                        "suffix": ""
                    },
                    {
                        "first": "Ivan Vladimir Meza",
                        "middle": [],
                        "last": "Ruiz",
                        "suffix": ""
                    },
                    {
                        "first": "Rolando",
                        "middle": [],
                        "last": "Coto-Solano",
                        "suffix": ""
                    },
                    {
                        "first": "Alexis",
                        "middle": [],
                        "last": "Palmer",
                        "suffix": ""
                    },
                    {
                        "first": "Elisabeth",
                        "middle": [],
                        "last": "Mager",
                        "suffix": ""
                    },
                    {
                        "first": "Ngoc",
                        "middle": [
                            "Thang"
                        ],
                        "last": "Vu",
                        "suffix": ""
                    },
                    {
                        "first": "Graham",
                        "middle": [],
                        "last": "Neubig",
                        "suffix": ""
                    },
                    {
                        "first": "Katharina",
                        "middle": [],
                        "last": "Kann",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "Proceedings of theThe First Workshop on NLP for Indigenous Languages of the Americas, Online. Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Xi- mena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Gim\u00e9nez-Lugo, Ricardo Ramos, Anna Currey, Vishrav Chaudhary, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager, Ngoc Thang Vu, Graham Neubig, and Katha- rina Kann. 2021. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. In Proceed- ings of theThe First Workshop on NLP for Indige- nous Languages of the Americas, Online. Associa- tion for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Exploring the Influence of Spelling Errors on Lexical Variation Measures",
                "authors": [
                    {
                        "first": "Ryo",
                        "middle": [],
                        "last": "Nagata",
                        "suffix": ""
                    },
                    {
                        "first": "Taisei",
                        "middle": [],
                        "last": "Sato",
                        "suffix": ""
                    },
                    {
                        "first": "Hiroya",
                        "middle": [],
                        "last": "Takamura",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 27th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "2391--2398",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ryo Nagata, Taisei Sato, and Hiroya Takamura. 2018. Exploring the Influence of Spelling Errors on Lex- ical Variation Measures. Proceedings of the 27th International Conference on Computational Linguis- tics, (2012):2391-2398.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Overcoming resistance: The normalization of an Amazonian tribal language",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Ortega",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [
                            "Alexander"
                        ],
                        "last": "Castro-Mamani",
                        "suffix": ""
                    },
                    {
                        "first": "Jaime Rafael Montoya",
                        "middle": [],
                        "last": "Samame",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages",
                "volume": "",
                "issue": "",
                "pages": "1--13",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020a. Overcom- ing resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1-13, Suzhou, China. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Using morphemes from agglutinative languages like Quechua and Finnish to aid in low-resource translation",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Ortega",
                        "suffix": ""
                    },
                    {
                        "first": "Krishnan",
                        "middle": [],
                        "last": "Pillaipakkamnatt",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages",
                "volume": "",
                "issue": "",
                "pages": "1--11",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Ortega and Krishnan Pillaipakkamnatt. 2018. Us- ing morphemes from agglutinative languages like Quechua and Finnish to aid in low-resource trans- lation. In Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages (LoResMT 2018), pages 1-11.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Neural machine translation with a polysynthetic low resource language",
                "authors": [
                    {
                        "first": "John",
                        "middle": [
                            "E"
                        ],
                        "last": "Ortega",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Castro Mamani",
                        "suffix": ""
                    },
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Machine Translation",
                "volume": "34",
                "issue": "4",
                "pages": "325--346",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John E Ortega, Richard Castro Mamani, and Kyunghyun Cho. 2020b. Neural machine trans- lation with a polysynthetic low resource language. Machine Translation, 34(4):325-346.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "fairseq: A fast, extensible toolkit for sequence modeling",
                "authors": [
                    {
                        "first": "Myle",
                        "middle": [],
                        "last": "Ott",
                        "suffix": ""
                    },
                    {
                        "first": "Sergey",
                        "middle": [],
                        "last": "Edunov",
                        "suffix": ""
                    },
                    {
                        "first": "Alexei",
                        "middle": [],
                        "last": "Baevski",
                        "suffix": ""
                    },
                    {
                        "first": "Angela",
                        "middle": [],
                        "last": "Fan",
                        "suffix": ""
                    },
                    {
                        "first": "Sam",
                        "middle": [],
                        "last": "Gross",
                        "suffix": ""
                    },
                    {
                        "first": "Nathan",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Grangier",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Auli",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
                "volume": "",
                "issue": "",
                "pages": "48--53",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N19-4009"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Bleu: a method for automatic evaluation of machine translation",
                "authors": [
                    {
                        "first": "Kishore",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    },
                    {
                        "first": "Salim",
                        "middle": [],
                        "last": "Roukos",
                        "suffix": ""
                    },
                    {
                        "first": "Todd",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "Wei-Jing",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {
                    "DOI": [
                        "10.3115/1073083.1073135"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "chrF++: words helping character n-grams",
                "authors": [
                    {
                        "first": "Maja",
                        "middle": [],
                        "last": "Popovi\u0107",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Second Conference on Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "612--618",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/W17-4770"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Maja Popovi\u0107. 2017. chrF++: words helping charac- ter n-grams. In Proceedings of the Second Con- ference on Machine Translation, pages 612-618, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A call for clarity in reporting BLEU scores",
                "authors": [
                    {
                        "first": "Matt",
                        "middle": [],
                        "last": "Post",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
                "volume": "",
                "issue": "",
                "pages": "186--191",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/W18-6319"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Parallel Treebanking Spanish-Quechua: how and how well do they align? Linguistic Issues in Language",
                "authors": [
                    {
                        "first": "Annette",
                        "middle": [],
                        "last": "Rios",
                        "suffix": ""
                    },
                    {
                        "first": "Anne",
                        "middle": [],
                        "last": "G\u00f6hring",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Volk",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Technology",
                "volume": "7",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Annette Rios, Anne G\u00f6hring, and Martin Volk. 2012. Parallel Treebanking Spanish-Quechua: how and how well do they align? Linguistic Issues in Lan- guage Technology, 7(1).",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Neural machine translation of rare words with subword units",
                "authors": [
                    {
                        "first": "Rico",
                        "middle": [],
                        "last": "Sennrich",
                        "suffix": ""
                    },
                    {
                        "first": "Barry",
                        "middle": [],
                        "last": "Haddow",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Birch",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "1715--1725",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P16-1162"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Ethnologue: Languages of the World. Twentysecond edition. Dallas Texas: SIL international. Online version",
                "authors": [
                    {
                        "first": "Gary",
                        "middle": [
                            "F"
                        ],
                        "last": "Simons",
                        "suffix": ""
                    },
                    {
                        "first": "Charles",
                        "middle": [
                            "D"
                        ],
                        "last": "Fenning",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gary F. Simons and Charles D. Fenning, editors. 2019. Ethnologue: Languages of the World. Twenty- second edition. Dallas Texas: SIL international. On- line version: http://www.ethnologue.com.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Parallel data, tools and interfaces in opus",
                "authors": [
                    {
                        "first": "J\u00f6rg",
                        "middle": [],
                        "last": "Tiedemann",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Attention is all you need",
                "authors": [
                    {
                        "first": "Ashish",
                        "middle": [],
                        "last": "Vaswani",
                        "suffix": ""
                    },
                    {
                        "first": "Noam",
                        "middle": [],
                        "last": "Shazeer",
                        "suffix": ""
                    },
                    {
                        "first": "Niki",
                        "middle": [],
                        "last": "Parmar",
                        "suffix": ""
                    },
                    {
                        "first": "Jakob",
                        "middle": [],
                        "last": "Uszkoreit",
                        "suffix": ""
                    },
                    {
                        "first": "Llion",
                        "middle": [],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "Aidan",
                        "middle": [
                            "N"
                        ],
                        "last": "Gomez",
                        "suffix": ""
                    },
                    {
                        "first": "\u0141ukasz",
                        "middle": [],
                        "last": "Kaiser",
                        "suffix": ""
                    },
                    {
                        "first": "Illia",
                        "middle": [],
                        "last": "Polosukhin",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "30",
                "issue": "",
                "pages": "5998--6008",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Obsolescencia ling\u00fc\u00edstica, descripci\u00f3n gramatical y documentaci\u00f3n de lenguas en el Per\u00fa: hacia un estado de la cuesti\u00f3n",
                "authors": [
                    {
                        "first": "Roberto",
                        "middle": [],
                        "last": "Zariquiey",
                        "suffix": ""
                    },
                    {
                        "first": "Harald",
                        "middle": [],
                        "last": "Hammarstr\u00f6m",
                        "suffix": ""
                    },
                    {
                        "first": "M\u00f3nica",
                        "middle": [],
                        "last": "Arakaki",
                        "suffix": ""
                    },
                    {
                        "first": "Arturo",
                        "middle": [],
                        "last": "Oncevay",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Miller",
                        "suffix": ""
                    },
                    {
                        "first": "Aracelli",
                        "middle": [],
                        "last": "Garc\u00eda",
                        "suffix": ""
                    },
                    {
                        "first": "Adriano",
                        "middle": [],
                        "last": "Ingunza",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Lexis",
                "volume": "43",
                "issue": "2",
                "pages": "271--337",
                "other_ids": {
                    "DOI": [
                        "10.18800/lexis.201902.001"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Roberto Zariquiey, Harald Hammarstr\u00f6m, M\u00f3nica Arakaki, Arturo Oncevay, John Miller, Aracelli Gar- c\u00eda, and Adriano Ingunza. 2019. Obsolescencia ling\u00fc\u00edstica, descripci\u00f3n gramatical y documentaci\u00f3n de lenguas en el Per\u00fa: hacia un estado de la cuesti\u00f3n. Lexis, 43(2):271-337.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Transfer learning for low-resource neural machine translation",
                "authors": [
                    {
                        "first": "Barret",
                        "middle": [],
                        "last": "Zoph",
                        "suffix": ""
                    },
                    {
                        "first": "Deniz",
                        "middle": [],
                        "last": "Yuret",
                        "suffix": ""
                    },
                    {
                        "first": "Jonathan",
                        "middle": [],
                        "last": "May",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "1568--1575",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/d16-1163"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. EMNLP 2016 -Con- ference on Empirical Methods in Natural Language Processing, Proceedings, pages 1568-1575.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF0": {
                "type_str": "table",
                "content": "<table><tr><td/><td>WebMisc</td><td>Lexicon</td><td>Handbook</td></tr><tr><td/><td>es quy</td><td>es quy</td><td>es quy</td></tr><tr><td>S</td><td>985</td><td>6161</td><td>2297</td></tr><tr><td>N</td><td colspan=\"3\">5002 2996 7050 6288 15537 8522</td></tr><tr><td>V</td><td colspan=\"3\">1929 2089 3962 3361 4137 5604</td></tr><tr><td>V1</td><td colspan=\"3\">1358 1673 2460 1838 2576 4645</td></tr><tr><td colspan=\"4\">V/N 0.38 0.69 0.56 0.53 0.26 0.65</td></tr><tr><td colspan=\"4\">V1/N 0.27 0.55 0.34 0.29 0.16 0.54</td></tr><tr><td colspan=\"4\">mean 2.59 1.43 1.77 1.87 3.75 1.52</td></tr></table>",
                "text": "The LNRE modelling for the Quechua Cusco datasets are shown in appendix as they are not used for the final submission.",
                "num": null,
                "html": null
            },
            "TABREF1": {
                "type_str": "table",
                "content": "<table/>",
                "text": "Percentage of word overlapping between the development and the new extracted datasets",
                "num": null,
                "html": null
            },
            "TABREF3": {
                "type_str": "table",
                "content": "<table/>",
                "text": "Results of transfer learning experiments +0",
                "num": null,
                "html": null
            },
            "TABREF4": {
                "type_str": "table",
                "content": "<table/>",
                "text": "Subword analysis on translated and reference sentence",
                "num": null,
                "html": null
            },
            "TABREF5": {
                "type_str": "table",
                "content": "<table><tr><td/><td>Rank</td><td>Team</td><td colspan=\"2\">BLEU chrF</td></tr><tr><td>Track 1</td><td>1 3</td><td>Helsinki REPUcs</td><td>5.38 3.1</td><td>0.394 0.358</td></tr><tr><td>Track 2</td><td>1 2</td><td>REPUcs Helsinki</td><td>2.91 3.63</td><td>0.346 0.343</td></tr></table>",
                "text": "",
                "num": null,
                "html": null
            },
            "TABREF6": {
                "type_str": "table",
                "content": "<table/>",
                "text": "Official results from AmericasNLP 2021 Shared Task competition on the two tracks.Track 1: Development set used for Training, Track 2: Development set not used for Training",
                "num": null,
                "html": null
            }
        }
    }
}