{
    "paper_id": "I08-1001",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:42:39.147348Z"
    },
    "title": "A Lemmatization Method for Modern Mongolian and its Application to Information Retrieval",
    "authors": [
        {
            "first": "Badam-Osor",
            "middle": [],
            "last": "Khaltar",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Tsukuba",
                "location": {
                    "addrLine": "1-2 Kasuga Tsukuba",
                    "postCode": "305-8550",
                    "country": "Japan"
                }
            },
            "email": ""
        },
        {
            "first": "Atsushi",
            "middle": [],
            "last": "Fujii",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Tsukuba",
                "location": {
                    "addrLine": "1-2 Kasuga Tsukuba",
                    "postCode": "305-8550",
                    "country": "Japan"
                }
            },
            "email": "fujii@slis.tsukuba.ac.jp"
        },
        {
            "first": "\u0414\u043e\u0440\u0436",
            "middle": [],
            "last": "\u0414\u0430\u043d\u0434\u0438\u0439",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Tsukuba",
                "location": {
                    "addrLine": "1-2 Kasuga Tsukuba",
                    "postCode": "305-8550",
                    "country": "Japan"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In Modern Mongolian, a content word can be inflected when concatenated with suffixes. Identifying the original forms of content words is crucial for natural language processing and information retrieval. We propose a lemmatization method for Modern Mongolian and apply our method to indexing for information retrieval. We use technical abstracts to show the effectiveness of our method experimentally.",
    "pdf_parse": {
        "paper_id": "I08-1001",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In Modern Mongolian, a content word can be inflected when concatenated with suffixes. Identifying the original forms of content words is crucial for natural language processing and information retrieval. We propose a lemmatization method for Modern Mongolian and apply our method to indexing for information retrieval. We use technical abstracts to show the effectiveness of our method experimentally.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The Mongolian language is divided into Traditional Mongolian, which uses the Mongolian alphabet, and Modern Mongolian, which uses the Cyrillic alphabet. In this paper, we focus solely on the latter and use the word \"Mongolian\" to refer to Modern Mongolian.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In Mongolian, which is an agglutinative language, each sentence is segmented on a phrase-byphrase basis. A phrase consists of a content word, such as a noun or a verb, and one or more suffixes, such as postpositional participles. A content word can potentially be inflected when concatenated with suffixes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Identifying the original forms of content words in Mongolian text is crucial for natural language processing and information retrieval. In information retrieval, the process of normalizing index terms is important, and can be divided into lemmatization and stemming. Lemmatization identifies the original form of an inflected word, whereas stemming identifies a stem, which is not necessarily a word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Existing search engines, such as Google and Yahoo!, do not perform lemmatization or stemming for indexing Web pages in Mongolian. Therefore, Web pages that include only inflected forms of a query cannot be retrieved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we propose a lemmatization method for Mongolian and apply our method to indexing for information retrieval.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Nouns, adjectives, numerals, and verbs can be concatenated with suffixes. Nouns and adjectives are usually concatenated with a sequence of a plural suffix, case suffix, and reflexive possessive suffix. Numerals are concatenated with either a case suffix or a reflexive possessive suffix. Verbs are concatenated with various suffixes, such as an aspect suffix, a participle suffix, and a mood suffix. Figure 1 shows the inflection types of content words in Mongolian phrases. In (a), there is no inflection in the content word \"\u043d\u043e\u043c (book)\", concatenated with the suffix \"\u044b\u043d (the genitive case)\". The content words are inflected in (b)-(e). Loanwords, which can be nouns, adjectives, or verbs in Mongolian, can also be concatenated with suffixes. In this paper, we define a loanword as a word imported from a Western language.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 400,
                        "end": 408,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Inflection types in Mongolian phrases",
                "sec_num": "2"
            },
            {
                "text": "Because loanwords are linguistically different from conventional Mongolian words, the suffix concatenation is also different from that for conventional Mongolian words. Thus, exception rules are required for loanwords.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inflection types in Mongolian phrases",
                "sec_num": "2"
            },
            {
                "text": "For example, if the loanword \"\u0441\u0442\u0430\u043d\u0446 (station)\" is to be concatenated with a genitive case suffix, \"\u044b\u043d\" should be selected from the five genitive case suffixes (i.e., \u044b\u043d, \u0438\u0439\u043d, \u044b, \u0438\u0439, and \u043d) based on the Mongolian grammar. However, because \"\u0441\u0442\u0430\u043d\u0446 (station)\" is a loanword, the genitive case \"\u0438\u0439\u043d\" is selected instead of \"\u044b\u043d\", resulting in the noun phrase \"\u0441\u0442\u0430\u043d\u0446\u0438\u0439\u043d (station's)\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inflection types in Mongolian phrases",
                "sec_num": "2"
            },
            {
                "text": "Additionally, the inflection (e) in Figure 1 never occurs for noun and adjective loanwords. Sanduijav et al. (2005) proposed a lemmatization method for noun and verb phrases in Mongolian. They manually produced inflection rules and concatenation rules for nouns and verbs. Then, they automatically produced a dictionary by aligning nouns or verbs with suffixes. Lemmatization for phrases is performed by consulting this dictionary. Ehara et al. (2004) proposed a morphological analysis method for Mongolian, for which they manually produced rules for inflections and concatenations. However, because the lemmatization methods proposed by Sanduijav et al. (2005) and Ehara et al. (2004) rely on dictionaries, these methods cannot lemmatize new words that are not in dictionaries, such as loanwords and technical terms. Khaltar et al. (2006) proposed a lemmatization method for Mongolian noun phrases that does not use a noun dictionary. Their method can be used for nouns, adjectives, and numerals, because the suffixes that are concatenated with these are almost the same and the inflection types are also the same. However, they were not aware of the applicability of their method to adjectives and numerals.",
                "cite_spans": [
                    {
                        "start": 92,
                        "end": 115,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 432,
                        "end": 451,
                        "text": "Ehara et al. (2004)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 638,
                        "end": 661,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 666,
                        "end": 685,
                        "text": "Ehara et al. (2004)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 818,
                        "end": 839,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 36,
                        "end": 44,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Inflection types in Mongolian phrases",
                "sec_num": "2"
            },
            {
                "text": "The method proposed by Khaltar et al. (2006) mistakenly extracts loanwords with endings that are different from conventional Mongolian words. For example, if the phrase \"\u044d\u043a\u043e\u043b\u043e\u0433\u0438\u0439\u043d (ecology's)\" is lemmatized, the resulting content word will be \"\u044d\u043a\u043e\u043b\u043e\u0433\", which is incorrect. The correct word is \"\u044d\u043a\u043e\u043b\u043e\u0433\u0438 (ecology)\". This error occurs because the ending \"-\u043e\u043b\u043e\u0433\u0438 (-ology)\" does not appear in conventional Mongolian words.",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 44,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "3"
            },
            {
                "text": "In addition, Khaltar et al. (2006) 's method applies (e) in Figure 1 to loanwords, whereas inflection (e) never occurs in noun and adjective loanwords.",
                "cite_spans": [
                    {
                        "start": 13,
                        "end": 34,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 60,
                        "end": 68,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "3"
            },
            {
                "text": "Lemmatization and stemming are arguably effective for indexing in information retrieval (Hull, 1996; Porter, 1980) . Stemmers have been developed for a number of agglutinative languages, including Malay (Tai et al., 2000) , Indonesian (Berlian Vega and Bressan, 2001), Finnish (Korenius et al., 2004) , Arabic (Larkey et al., 2002) , Swedish (Carlberger et al., 2001 ), Slovene (Popovi\u010d and Willett, 1992) and Turkish (Ekmek\u00e7ioglu et al., 1996) . Xu and Croft (1998) and Melucci and Orio (2003) independently proposed a languageindependent method for stemming, which analyzes a corpus in a target language and identifies an equivalent class consisting of an original form, inflected forms, and derivations. However, their method, which cannot identify the original form in each class, cannot be used for natural language applications where word occurrences must be standardized by their original forms.",
                "cite_spans": [
                    {
                        "start": 88,
                        "end": 100,
                        "text": "(Hull, 1996;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 101,
                        "end": 114,
                        "text": "Porter, 1980)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 197,
                        "end": 221,
                        "text": "Malay (Tai et al., 2000)",
                        "ref_id": null
                    },
                    {
                        "start": 277,
                        "end": 300,
                        "text": "(Korenius et al., 2004)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 310,
                        "end": 331,
                        "text": "(Larkey et al., 2002)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 342,
                        "end": 366,
                        "text": "(Carlberger et al., 2001",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 378,
                        "end": 405,
                        "text": "(Popovi\u010d and Willett, 1992)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 410,
                        "end": 444,
                        "text": "Turkish (Ekmek\u00e7ioglu et al., 1996)",
                        "ref_id": null
                    },
                    {
                        "start": 447,
                        "end": 466,
                        "text": "Xu and Croft (1998)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 471,
                        "end": 494,
                        "text": "Melucci and Orio (2003)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "3"
            },
            {
                "text": "Finite State Transducers (FSTs) have been applied to lemmatization. Although Karttunen and Beesley (2003) suggested the applicability of FSTs to various languages, no rule has actually been proposed for Mongolian. The rules proposed in this paper can potentially be used for FSTs.",
                "cite_spans": [
                    {
                        "start": 77,
                        "end": 105,
                        "text": "Karttunen and Beesley (2003)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "3"
            },
            {
                "text": "To the best of our knowledge, no attempt has been made to apply lemmatization or stemming to information retrieval for Mongolian. Our research is the first serious effort to address this problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "3"
            },
            {
                "text": "In view of the discussion in Section 3, we enhanced the lemmatization method proposed by Khaltar et al. (2006) . The strength of this method is that noun dictionaries are not required. Figure 2 shows the overview of our lemmatization method for Mongolian. Our method consists of two segments, which are identified with dashed lines in Figure 2 : \"lemmatization for verb phrases\" and \"lemmatization for noun phrases\".",
                "cite_spans": [
                    {
                        "start": 89,
                        "end": 110,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 185,
                        "end": 193,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 335,
                        "end": 343,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "In Figure 2 , we enhanced the method proposed by Khaltar et al. (2006) from three perspectives.",
                "cite_spans": [
                    {
                        "start": 49,
                        "end": 70,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 11,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "First, we introduced \"lemmatization for verb phrases\". There is a problem to be solved when we target both noun and verb phrases. There are a number of suffixes that can concatenate with both verbs and nouns, but the inflection type can be different depending on the part of speech. As a result, verb phrases can incorrectly be lemmatized as noun phrases and vice versa.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "Because new verbs are not created as frequently as nouns, we predefine a verb dictionary, but do not use a noun dictionary. We first lemmatize an entered phrase as a verb phrase and then check whether the extracted content word is defined in our verb dictionary. If the content word is not defined in our verb dictionary, we lemmatize the input phrase as a noun phrase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "Second, we introduced a \"loanword identification rule\" in \"lemmatization for noun phrases\". We identify a loanword phrase before applying a \"noun suffix segmentation rule\" and \"vowel insertion rule\". Because segmentation rules are different for conventional Mongolian words and loanwords, we enhance the noun suffix segmentation rule that was originally proposed by Khaltar et al. (2006) . Additionally, we do not use the vowel insertion rule, if the entered phrase is detected as a loanword phrase. The reason is that vowel elimination never occurs in noun loanwords. Third, unlike Khaltar et al. 2006, we targeted adjective and numeral phrases. Because the suffixes concatenated with nouns, adjectives, and numerals are almost the same, the lemmatization method for noun phrases can also be used for adjective and numeral phrases without any modifications. We use \"lemmatization for noun phrases\" to refer to the lemmatization for noun, adjective, and numeral phrases.",
                "cite_spans": [
                    {
                        "start": 366,
                        "end": 387,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "We briefly explain our lemmatization process using Figure 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 51,
                        "end": 59,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "We consult a \"verb suffix dictionary\" and perform backward partial matching to determine whether a suffix is concatenated at the end of a phrase. If a suffix is detected, we use a \"verb suffix segmentation rule\" to remove the suffix and extract the content word. This process will be repeated until the residue of the phrase does not match any of the entries in the verb suffix dictionary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
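A minimal sketch of the iterative suffix-stripping loop described above, assuming a toy suffix list; `VERB_SUFFIXES` and `strip_verb_suffixes` are hypothetical names standing in for the paper's 126-entry verb suffix dictionary and its per-suffix segmentation rules.

```python
# Sketch of the backward-matching, repeated suffix-stripping loop.
# VERB_SUFFIXES is a toy stand-in for the 126-entry verb suffix dictionary;
# the real method also applies a per-suffix segmentation rule at each step.
VERB_SUFFIXES = ["сан", "сэн", "лаа", "в"]  # illustrative entries only

def strip_verb_suffixes(phrase: str) -> str:
    """Match the phrase ending against the suffix dictionary (backward
    matching) and remove the matched suffix, repeating until the residue
    matches no dictionary entry."""
    changed = True
    while changed:
        changed = False
        # Try longer suffixes first so 'сан' wins over a shorter match.
        for suffix in sorted(VERB_SUFFIXES, key=len, reverse=True):
            if len(phrase) > len(suffix) and phrase.endswith(suffix):
                phrase = phrase[:-len(suffix)]
                changed = True
                break
    return phrase
```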
            {
                "text": "We use a \"vowel insertion rule\" to check whether vowel elimination occurred in the content word and insert the eliminated vowel.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "If the content word is defined in a \"verb dictionary\", we output the content word as a verb and terminate the lemmatization process. If not, we use the entered phrase and perform lemmatization for noun phrases. We consult a \"noun suffix dictionary\" to determine whether one or more suffixes are concatenated at the end of the target phrase. We use a \"loanword identification rule\" to identify whether the phrase is a loanword phrase. We use a \"noun suffix segmentation rule\" to remove the suffixes and extract the content word. If the phrase is identified as a loanword phrase we use different segmentation rules.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "We use the \"vowel insertion rule\" which is also used for verb phrases to check whether vowel elimination occurred in the content word and insert the eliminated vowel. However, if the phrase is identified as a loanword phrase, we do not use the vowel insertion rule.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "If the target phrase does not match any of the entries in the noun suffix dictionary, we determine that a suffix is not concatenated and we output the phrase as it is.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
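Putting the preceding entries together, the overall control flow of Figure 2 might look like the sketch below. It builds on `strip_verb_suffixes` above; the remaining helpers (`matches_noun_suffix`, `is_loanword`, `strip_noun_suffixes`, `insert_eliminated_vowel`) are hypothetical placeholders for the dictionaries and rules of Sections 4.2-4.8, not the paper's actual implementation.

```python
def lemmatize(phrase: str, verb_dict: set) -> str:
    """Control flow of Figure 2: lemmatize as a verb phrase first, check
    the result against the verb dictionary, then fall back to noun-phrase
    lemmatization. Helper functions are placeholders, not the real rules."""
    candidate = insert_eliminated_vowel(strip_verb_suffixes(phrase))
    if candidate in verb_dict:
        return candidate                        # output as a verb
    if not matches_noun_suffix(phrase):
        return phrase                           # no suffix: output as is
    loan = is_loanword(phrase)                  # loanword identification rule
    word = strip_noun_suffixes(phrase, loanword=loan)
    # Vowel insertion is skipped for loanwords, since vowel elimination
    # never occurs in noun loanwords.
    return word if loan else insert_eliminated_vowel(word)
```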
            {
                "text": "The inflection types (b)-(d) in Figure 1 are processed by the verb suffix segmentation rule and noun suffix segmentation rule. The inflection (e) in Figure 1 is processed by the vowel insertion rule.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 32,
                        "end": 40,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 149,
                        "end": 157,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "We elaborate on the dictionaries and rules in Sections 4.2-4.8.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "We produced a verb suffix dictionary, which consists of 126 suffixes that can concatenate with verbs. These suffixes include aspect suffixes, participle suffixes, and mood suffixes. Figure 3 shows a fragment of our verb suffix dictionary, in which inflected forms of suffixes are shown in parentheses. All suffixes corresponding to the same suffix type represent the same meaning.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 182,
                        "end": 190,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Verb suffix dictionary",
                "sec_num": "4.2"
            },
            {
                "text": "For the verb suffix segmentation rule, we produced 179 rules. There are one or more segmentation rules for each of the 126 verb suffixes mentioned in Section 4.2. Figure 4 shows a fragment of the verb suffix segmentation rule for suffix \"\u0432 (past)\". In the column \"Segmentation rule\", the condition of each \"if\" sentence is a phrase ending. \"V\" refers to a vowel and \"*\" refers to any strings. \"C9\" refers to any of the nine consonants \"\u0446\", \"\u0436\", \"\u0437\", \"\u0441\", \"\u0434\", \"\u0442\", \"\u0448\", \"\u0447\", or \"\u0445\", and \"C7\" refers to any of the seven consonants \"\u043c\", \"\u0433\", \"\u043d\", \"\u043b\", \"\u0431\", \"\u0432\", or \"\u0440\". If a condition is satisfied, we remove one or more corresponding characters.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 163,
                        "end": 171,
                        "text": "Figure 4",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Verb suffix segmentation rule",
                "sec_num": "4.3"
            },
            {
                "text": "For example, because the verb phrase \"\u0448\u0438\u043d\u044d\u0447\u043b\u044d\u0432 (renew + past)\" satisfies condition (ii), we remove the suffix \"\u0432\" and the preceding vowel \"\u044d\" to extract \"\u0448\u0438\u043d\u044d\u0447\u043b\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Verb suffix segmentation rule",
                "sec_num": "4.3"
            },
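As an illustration, one of these conditions might be implemented as below. Only fragments of the conditions are recoverable from the text (the full set is in Figure 4), so the branch shown is an assumption reconstructed from the "шинэчлэв" example; the vowel set and consonant classes follow the definitions above.

```python
VOWELS = set("аэиоуөүыяеёю")   # Mongolian Cyrillic vowels
C9 = set("цжзсдтшчх")          # the nine consonants, per Figure 4
C7 = set("мгнлбвр")            # the seven consonants, per Figure 4

def segment_past_suffix_v(phrase: str) -> str | None:
    """Assumed reconstruction of condition (ii) for the suffix 'в' (past):
    if 'в' is preceded by a vowel that follows a C7 consonant, remove both
    the suffix and that vowel, e.g. 'шинэчлэв' -> 'шинэчл'."""
    if not phrase.endswith("в"):
        return None
    stem = phrase[:-1]                    # drop the suffix 'в'
    if len(stem) >= 2 and stem[-1] in VOWELS and stem[-2] in C7:
        return stem[:-1]                  # also drop the preceding vowel
    return stem                           # otherwise drop 'в' only (assumed)
```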
            {
                "text": "We use the verb dictionary produced by Sanduijav et al. (2005) , which includes 1254 verbs.",
                "cite_spans": [
                    {
                        "start": 39,
                        "end": 62,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Verb dictionary",
                "sec_num": "4.4"
            },
            {
                "text": "We use the noun suffix dictionary produced by Khaltar et al. (2006) , which contains 35 suffixes that can be concatenated with nouns. These suffixes are postpositional particles. Figure 5 shows a fragment of the dictionary, in which inflected forms of suffixes are shown in parentheses.",
                "cite_spans": [
                    {
                        "start": 46,
                        "end": 67,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 179,
                        "end": 187,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Noun suffix dictionary",
                "sec_num": "4.5"
            },
            {
                "text": "There are 196 noun suffix segmentation rules, of which 173 were proposed by Khaltar et al. (2006) . As we explained in Section 3, these 173 rules often incorrectly lemmatize loanwords with different endings from conventional Mongolian words.",
                "cite_spans": [
                    {
                        "start": 76,
                        "end": 97,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Noun suffix segmentation rule",
                "sec_num": "4.6"
            },
            {
                "text": "We analyzed the list of English suffixes and found that English suffixes \"-ation\" and \"-ology\" are incorrectly lemmatized by Khaltar et al. (2006) . In Mongolian, \"-ation\" is transliterated into \"\u0430\u0446\u0438\" or \"\u044f\u0446\u0438\" and \"-ology\" is transliterated into \"\u043e\u043b\u043e\u0433\u0438\". Thus, we produced 23 rules for loanwords that end with \"\u0430\u0446\u0438\", \"\u044f\u0446\u0438\", or \"\u043e\u043b\u043e\u0433\u0438\". Figure 6 shows a fragment of our suffix segmentation rule for loanwords. For example, for the loanword phrase \"\u044d\u043a\u043e\u043b\u043e\u0433\u0438\u0439\u043d (ecology + genitive)\", we use the segmentation rule for suffix \"\u0438\u0439\u043d (genitive)\" in Figure 6 . We remove the suffix \"\u0438\u0439\u043d (genitive)\" and add \"\u0438\" to the end of the content word. As a result, the noun \"\u044d\u043a\u043e\u043b\u043e\u0433\u0438 (ecology)\" is correctly extracted. ",
                "cite_spans": [
                    {
                        "start": 125,
                        "end": 146,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 336,
                        "end": 344,
                        "text": "Figure 6",
                        "ref_id": "FIGREF4"
                    },
                    {
                        "start": 540,
                        "end": 548,
                        "text": "Figure 6",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Noun suffix segmentation rule",
                "sec_num": "4.6"
            },
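A sketch of one of these loanword rules, reconstructed from the "экологийн" example; the function name and the exact trigger list are assumptions, and the full 23-rule table is in Figure 6.

```python
# Endings that trigger the loanword rules: transliterations of
# '-ation' ('аци', 'яци') and '-ology' ('ологи').
LOANWORD_ENDINGS = ("аци", "яци", "ологи")

def strip_genitive_from_loanword(phrase: str) -> str | None:
    """For a loanword phrase ending in the genitive suffix 'ийн', remove
    the suffix and restore the final 'и', e.g. 'экологийн' -> 'экологи'."""
    if phrase.endswith("ийн"):
        stem = phrase[:-len("ийн")] + "и"
        if stem.endswith(LOANWORD_ENDINGS):
            return stem
    return None
```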
            {
                "text": "To insert an eliminated vowel and extract the original form of a content word, we check the last two characters of the content word. If they are both consonants, we determine that a vowel was eliminated. However, a number of Mongolian words end with two consonants inherently and, therefore, Khaltar et al. (2006) referred to a textbook on the Mongolian grammar (Ts, 2002) to produce 12 rules to determine when to insert a vowel between two consecutive consonants. We also use these rules as our vowel insertion rule. Khaltar et al. (2006) proposed rules for extracting loanwords from Mongolian corpora. Words that satisfy one of seven conditions are extracted as loanwords. Of the seven conditions, we do not use the condition that extracts a word ending with \"consonants + \u0438\" as a loanword because it was not effective for lemmatization purposes in preliminary study.",
                "cite_spans": [
                    {
                        "start": 292,
                        "end": 313,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 362,
                        "end": 372,
                        "text": "(Ts, 2002)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 518,
                        "end": 539,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Vowel insertion rule",
                "sec_num": "4.7"
            },
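The trigger for vowel insertion, as described here, is a simple check on the last two characters; a minimal sketch follows. The 12 rules that decide which vowel to insert, and the exceptions for words that inherently end in two consonants, are not reproduced.

```python
MONGOLIAN_VOWELS = set("аэиоуөүыяеёю")

def vowel_was_eliminated(word: str) -> bool:
    """Return True if the last two characters of an extracted content word
    are both consonants, the cue that a vowel was eliminated (Section 4.7).
    Which vowel to insert is decided by 12 separate rules not shown here."""
    return (len(word) >= 2
            and word[-1] not in MONGOLIAN_VOWELS
            and word[-2] not in MONGOLIAN_VOWELS)
```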
            {
                "text": "We collected 1102 technical abstracts from the \"Mongolian IT Park\" 1 and used them for experiments. There were 178,448 phrase tokens and 17,709 phrase types in the 1102 technical abstracts. We evaluated the accuracy of our lemmatization method (Section 5.2) and the effectiveness of our method in information retrieval (Section 5.3) experimentally.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation method",
                "sec_num": "5.1"
            },
            {
                "text": "1 http://www.itpark.mn/ (October, 2007) ",
                "cite_spans": [
                    {
                        "start": 24,
                        "end": 39,
                        "text": "(October, 2007)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation method",
                "sec_num": "5.1"
            },
            {
                "text": "Two Mongolian graduate students served as assessors. Neither of the assessors was an author of this paper. The assessors provided the correct answers for lemmatization. The assessors also tagged each word with its part of speech.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
            {
                "text": "The two assessors performed the same task independently. Differences can occur between two assessors on this task. We measured the agreement of the two assessors by the Kappa coefficient, which ranges from 0 to 1. The Kappa coefficients for performing lemmatization and tagging of parts of speech were 0.96 and 0.94, respectively, which represents almost perfect agreement (Landis and Koch, 1977) . However, to enhance the objectivity of the evaluation, we used only the phrases for which the two assessors agreed with respect to the part of speech and lemmatization.",
                "cite_spans": [
                    {
                        "start": 373,
                        "end": 396,
                        "text": "(Landis and Koch, 1977)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
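The inter-assessor agreement figures can be reproduced with the standard Cohen's kappa computation; a self-contained sketch:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two assessors labeling the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    rate and p_e is the agreement expected by chance from the marginals."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```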
            {
                "text": "We were able to use the noun and verb dictionaries of Sanduijav et al. (2005) . Therefore, we compared our lemmatization method with Sanduijav et al. (2005) and Khaltar et al. (2006) in terms of accuracy.",
                "cite_spans": [
                    {
                        "start": 54,
                        "end": 77,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 133,
                        "end": 156,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 161,
                        "end": 182,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
            {
                "text": "Accuracy is the ratio of the number of phrases correctly lemmatized by the method under evaluation to the total number of target phrases. Here, the target phrases are noun, verb, adjective, and numeral phrases. Table 1 shows the results of lemmatization. We targeted 15,478 phrase types in the technical abstracts. Our experiment is the largest evaluation for Mongolian lemmatization in the literature. In contrast, Sanduijav et al. (2005) and Khaltar et al. (2006) used only 680 and 1167 phrase types, respectively, for evaluation purposes.",
                "cite_spans": [
                    {
                        "start": 416,
                        "end": 439,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 444,
                        "end": 465,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 211,
                        "end": 218,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
            {
                "text": "In Table 1 , the accuracy of our method for nouns, which were targeted in all three methods, was higher than those of Sanduijav et al. (2005) and Khaltar et al. (2006) . Because our method and that of Sanduijav et al. (2005) used the same verb dictionary, the accuracy for verbs is principally the same for both methods. The accuracy for verbs was low, because a number of verbs were not included in the verb dictionary and were mistakenly lemmatized as noun phrases. However, this problem will be solved by enhancing the verb dictionary in the future. In total, the accuracy of our method was higher than those of Sanduijav et al. (2005) and Khaltar et al. (2006) . We analyzed the errors caused by our method in Figure 7 . In the column \"Example\", the left side and the right side of an arrow denote an error and the correct answer, respectively.",
                "cite_spans": [
                    {
                        "start": 118,
                        "end": 141,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 146,
                        "end": 167,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 201,
                        "end": 224,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 615,
                        "end": 638,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 643,
                        "end": 664,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    },
                    {
                        "start": 714,
                        "end": 722,
                        "text": "Figure 7",
                        "ref_id": "FIGREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
            {
                "text": "The error (a) occurred to nouns, adjectives, and numerals, in which the ending of a content word was mistakenly recognized as a suffix and was removed. The error (b) occurred because we did not consider irregular nouns. The error (c) occurred to loanword nouns because the loanword identification rule was not sufficient. The error (d) occurred because we relied on a verb dictionary. The error (e) occurred because a number of nouns were incorrectly lemmatized as verbs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
            {
                "text": "For the errors (a)-(c), we have not found solutions. The error (d) can be solved by enhancing the verb dictionary in the future. If we are able to use part of speech information, we can solve the error (e). There are a number of automatic methods for tagging parts of speech (Brill, 1997) , which have promise for alleviating the error (e).",
                "cite_spans": [
                    {
                        "start": 275,
                        "end": 288,
                        "text": "(Brill, 1997)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating lemmatization",
                "sec_num": "5.2"
            },
            {
                "text": "We evaluated the effectiveness of lemmatization methods in indexing for information retrieval. No test collection for Mongolian information retrieval is available to the public. We used the 1102 technical abstracts to produce our test collection. Figure 8 shows an example technical abstract, in which the title is \"Advanced Albumin Fusion Technology\" in English. Each technical abstract contains one or more keywords. In Figure 8 , keywords, such as \"\u0446\u0443\u0441\u043d\u044b \u0438\u0439\u043b\u0434\u044d\u0441 (blood serum)\" and \"\u044d\u0445\u044d\u0441 (placenta)\" are annotated.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 247,
                        "end": 255,
                        "text": "Figure 8",
                        "ref_id": "FIGREF6"
                    },
                    {
                        "start": 422,
                        "end": 430,
                        "text": "Figure 8",
                        "ref_id": "FIGREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
            {
                "text": "We used two different types of queries for our evaluation. First, we used each keyword as a query, which we call \"keyword query (KQ)\". Second, we used each keyword list as a query, which we call \"list query (LQ)\". The average number for keywords in the keywords list was 6.1. For each query, we used as the relevant documents the abstracts that were annotated with the query keyword in the keywords field. Thus, we were able to avoid the cost of relevance judgments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
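As a sketch of how such a test collection can be assembled without manual relevance judgments, consider the following hypothetical illustration; the field names "id" and "keywords" are assumptions, not the paper's code.

```python
# Sketch: derive KQ/LQ queries and relevance sets from keyword annotations.
from collections import defaultdict

def build_test_collection(abstracts):
    """abstracts: iterable of dicts with hypothetical "id" and "keywords" fields."""
    kq_relevant = defaultdict(set)  # keyword query (KQ) -> ids of relevant abstracts
    lq_queries = {}                 # abstract id -> list query (LQ): its keyword list
    for doc in abstracts:
        lq_queries[doc["id"]] = " ".join(doc["keywords"])
        for keyword in doc["keywords"]:
            kq_relevant[keyword].add(doc["id"])
    return kq_relevant, lq_queries
```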
            {
                "text": "The target documents are the 1102 technical abstracts, from which we extracted content words in the title, abstract, and result fields as index terms. However, we did not use the keywords field for indexing purposes. We used Okapi BM25 (Robertson et al., 1995) as the retrieval model.",
                "cite_spans": [
                    {
                        "start": 236,
                        "end": 260,
                        "text": "(Robertson et al., 1995)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
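For reference, the standard Okapi BM25 term-weighting scheme (Robertson et al., 1995) can be sketched as follows. This is a generic sketch, not the authors' implementation; k1 and b are the usual free parameters, and the collection statistics are assumed to come from the inverted index.

```python
import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, n_docs,
               k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a bag-of-words query.

    doc_tf: term -> frequency in this document; df: term -> document frequency.
    """
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0 or term not in df:
            continue
        idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5))
        length_norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * length_norm
    return score
```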
            {
                "text": "We used the lemmatization methods in Table 2 to extract content words and compared the Mean Average Precision (MAP) of each method using KQ and LQ. MAP has commonly been used to evaluate the effectiveness of information retrieval. Because there were many queries for which the average precision was zero in all methods, we discarded those queries. There were 686 remaining KQs and 273 remaining LQs.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 37,
                        "end": 44,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
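Average precision for one query, and MAP as its mean over queries, can be computed as in the following sketch; this is a generic illustration, not the evaluation script used in the paper. (Queries whose average precision is zero under every method would be discarded before averaging, as described above.)

```python
def average_precision(ranking, relevant):
    """Average precision of one ranked list against a set of relevant doc ids."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranking, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```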
            {
                "text": "The average number of relevant documents for each query was 2.1. Although this number is small, the number of queries is large. Therefore, our evaluation result can be stable, as in evaluations for question answering (Voorhees and Tice, 2000) .",
                "cite_spans": [
                    {
                        "start": 217,
                        "end": 242,
                        "text": "(Voorhees and Tice, 2000)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
            {
                "text": "We can derive the following points from Table 2 . First, to clarify the effectiveness of the lemmatization in information retrieval, we compare \"no lemmatization\" with the other methods. Any lemmatization method improved the MAP for both KQ and LQ. Thus, lemmatization was effective for information retrieval in Mongolian. Second, we compare the MAP of our method with those of Sanduijav et al. (2005) and Khaltar et al. (2006) . Our method was more effective than the method of Sanduijav et al. (2005) for both KQ and LQ. However, the difference between Khaltar et al. (2006) and our method was small for KQ and our method was less effective than Khaltar et al.(2006) for LQ. This is because although we enhanced the lemmatization for verbs, adjectives, numerals, and loanwords, the effects were overshadowed by a large number of queries comprising conventional Mongolian nouns. Finally, our method did not outperform the method using the correct lemmatization. We used the paired t-test for statistical testing, which investigates whether the difference in performance is meaningful or simply because of chance (Keen, 1992) . Table 3 shows the results, in which \"<\" and \"<<\" indicate that the difference of two results was significant at the 5% and 1% levels, respectively, and \"-\" indicates that the difference of two results was not significant.",
                "cite_spans": [
                    {
                        "start": 378,
                        "end": 401,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 406,
                        "end": 427,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 479,
                        "end": 502,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 555,
                        "end": 576,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 648,
                        "end": 668,
                        "text": "Khaltar et al.(2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1113,
                        "end": 1125,
                        "text": "(Keen, 1992)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 40,
                        "end": 47,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 1128,
                        "end": 1135,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
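A paired t-test over per-query average precision values can be run as in the sketch below. The use of scipy here is an assumption for illustration; the paper does not say how the test was computed.

```python
from scipy import stats

def significance_marker(ap_a, ap_b):
    """Paired t-test on per-query average precision of two methods.

    Returns the marker style used in Table 3; the direction of the
    difference still has to be read off the means themselves.
    """
    t_stat, p_value = stats.ttest_rel(ap_a, ap_b)
    if p_value < 0.01:
        return "<<"  # significant at the 1% level
    if p_value < 0.05:
        return "<"   # significant at the 5% level
    return "-"       # not significant
```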
            {
                "text": "Looking at Table 3 , the differences between no lemmatization and any lemmatization method, such as Sanduijav et al. (2005) , Khaltar et al. (2006) , our method, and correct lemmatization, were statistically significant in MAP for KQ. However, because the MAP value of no lemmatization was improved for LQ, the differences between no lemmatization and the lemmatization methods were less significant than those for KQ. The difference between Sanduijav et al. (2005) and our method was statistically significant in MAP for both KQ and LQ. However, the difference between Khaltar et al. (2006) and our method was not significant in MAP for both KQ and LQ. Although, the difference between our method and correct lemmatization was statistically significant in MAP for KQ, the difference was not significant in MAP for LQ. ",
                "cite_spans": [
                    {
                        "start": 100,
                        "end": 123,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 126,
                        "end": 147,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 442,
                        "end": 465,
                        "text": "Sanduijav et al. (2005)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 570,
                        "end": 591,
                        "text": "Khaltar et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 11,
                        "end": 18,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating the effectiveness of lemmatization in information retrieval",
                "sec_num": "5.3"
            },
            {
                "text": "In Modern Mongolian, a content word can potentially be inflected when concatenated with suffixes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "Identifying the original forms of content words is crucial for natural language processing and information retrieval.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "In this paper, we proposed a lemmatization method for Modern Mongolian. We enhanced the lemmatization method proposed by Khaltar et al.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            }
        ],
        "back_matter": [
            {
                "text": " (2006). We targeted nouns, verbs, adjectives, and numerals. We also improved the lemmatization for loanwords.We evaluated our lemmatization method experimentally. The accuracy of our method was higher than those of existing methods. We also applied our lemmatization method to information retrieval and improved the retrieval accuracy.Future work includes using a part of speech tagger because the part of speech information is effective for lemmatization.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Indexing the Indonesian Web: Language identification and miscellaneous issues",
                "authors": [
                    {
                        "first": "S N",
                        "middle": [],
                        "last": "Vinsensius Berlian Vega",
                        "suffix": ""
                    },
                    {
                        "first": "St\u00e9phane",
                        "middle": [],
                        "last": "Bressan",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Tenth International World Wide Web Conference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vinsensius Berlian Vega S N and St\u00e9phane Bressan. 2001. Indexing the Indonesian Web: Language iden- tification and miscellaneous issues. Tenth Interna- tional World Wide Web Conference, Hong Kong.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Natural Language Processing Using Very Large Corpora",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Brill",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eric Brill. 1997. Natural Language Processing Using Very Large Corpora. Kluwer Academic Press.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Improving Precision in Information Retrieval for Swedish using Stemming",
                "authors": [
                    {
                        "first": "Johan",
                        "middle": [],
                        "last": "Carlberger",
                        "suffix": ""
                    },
                    {
                        "first": "Hercules",
                        "middle": [],
                        "last": "Dalianis",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Hassel",
                        "suffix": ""
                    },
                    {
                        "first": "Ola",
                        "middle": [],
                        "last": "Knutsson",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of NODALIDA '01 -13th Nordic Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Johan Carlberger, Hercules Dalianis, Martin Hassel, and Ola Knutsson. 2001. Improving Precision in Informa- tion Retrieval for Swedish using Stemming. Proceed- ings of NODALIDA '01 -13th Nordic Conference on Computational Linguistics.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Mongolian morphological analysis using ChaSen",
                "authors": [
                    {
                        "first": "Terumasa",
                        "middle": [],
                        "last": "Ehara",
                        "suffix": ""
                    },
                    {
                        "first": "Suzushi",
                        "middle": [],
                        "last": "Hayata",
                        "suffix": ""
                    },
                    {
                        "first": "Nobuyuki",
                        "middle": [],
                        "last": "Kimura",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 10th Annual Meeting of the Association for Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "709--712",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Terumasa Ehara, Suzushi Hayata, and Nobuyuki Kimu- ra. 2004. Mongolian morphological analysis using ChaSen. Proceedings of the 10th Annual Meeting of the Association for Natural Language Processing, pp. 709-712. (In Japanese).",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Stemming and n-gram matching for term conflation in Turkish texts",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "\u00c7una",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "F"
                        ],
                        "last": "Ekmek\u00e7ioglu",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Lynch",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Willett",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Information Research News",
                "volume": "7",
                "issue": "1",
                "pages": "2--6",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "\u00c7una F. Ekmek\u00e7ioglu, Michael F. Lynch, and Peter Willett. 1996. Stemming and n-gram matching for term conflation in Turkish texts. Information Re- search News, Vol. 7, No. 1, pp. 2-6.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Stemming algorithms -a case study for detailed evaluation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [
                            "A"
                        ],
                        "last": "Hull",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Journal of the American Society for Information Science and Technology",
                "volume": "47",
                "issue": "1",
                "pages": "70--84",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David A. Hull. 1996. Stemming algorithms -a case study for detailed evaluation. Journal of the Ameri- can Society for Information Science and Technology, Vol. 47, No. 1, pp. 70-84.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Finite State Morphology",
                "authors": [
                    {
                        "first": "Lauri",
                        "middle": [],
                        "last": "Karttunen",
                        "suffix": ""
                    },
                    {
                        "first": "Kenneth",
                        "middle": [
                            "R"
                        ],
                        "last": "Beesley",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lauri Karttunen and Kenneth R. Beesley. 2003. Finite State Morphology. CSLI Publications. Stanford.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Presenting results of experimental retrieval comparisons",
                "authors": [
                    {
                        "first": "Micheal",
                        "middle": [
                            "E"
                        ],
                        "last": "Keen",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Information Processing and Management",
                "volume": "28",
                "issue": "4",
                "pages": "491--502",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Micheal E. Keen. 1992. Presenting results of experi- mental retrieval comparisons. Information Processing and Management, Vol. 28, No. 4, pp. 491-502.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Extracting loanwords from Mongolian corpora and producing a Japanese-Mongolian bilingual dictionary",
                "authors": [
                    {
                        "first": "Atsushi",
                        "middle": [],
                        "last": "Badam-Osor Khaltar",
                        "suffix": ""
                    },
                    {
                        "first": "Tetsuya",
                        "middle": [],
                        "last": "Fujii",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ishikawa",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 21st International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Badam-Osor Khaltar, Atsushi Fujii, and Tetsuya Ishi- kawa. 2006. Extracting loanwords from Mongolian corpora and producing a Japanese-Mongolian bilin- gual dictionary. Proceedings of the 21st International Conference on Computational Linguistics and 44th",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Annual Meeting of the Association for Computational Linguistics",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "657--664",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Annual Meeting of the Association for Computational Linguistics, pp. 657-664.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Stemming and Lemmatization in the Clustering of Finnish Text Documents",
                "authors": [
                    {
                        "first": "Tuomo",
                        "middle": [],
                        "last": "Korenius",
                        "suffix": ""
                    },
                    {
                        "first": "Jorma",
                        "middle": [],
                        "last": "Laurikkala",
                        "suffix": ""
                    },
                    {
                        "first": "Kalervo",
                        "middle": [],
                        "last": "J\u00e4rvelin",
                        "suffix": ""
                    },
                    {
                        "first": "Martti",
                        "middle": [],
                        "last": "Juhola",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the thirteenth Association for Computing Machinery international conference on Information and knowledge management",
                "volume": "",
                "issue": "",
                "pages": "625--633",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tuomo Korenius, Jorma Laurikkala, Kalervo J\u00e4rvelin, and Martti Juhola. 2004. Stemming and Lemmatization in the Clustering of Finnish Text Documents. Proceedings of the thirteenth Associa- tion for Computing Machinery international confe- rence on Information and knowledge management. pp. 625-633.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "The measurement of observer agreement for categorical data",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Richard",
                        "suffix": ""
                    },
                    {
                        "first": "Gary",
                        "middle": [
                            "G"
                        ],
                        "last": "Landis",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Koch",
                        "suffix": ""
                    }
                ],
                "year": 1977,
                "venue": "Biometrics",
                "volume": "33",
                "issue": "1",
                "pages": "159--174",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard J. Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, Vol. 33, No. 1, pp. 159-174.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Improving Stemming for Arabic Information Retrieval: Light Stemming and Co-occurrence Analysis",
                "authors": [
                    {
                        "first": "Leah",
                        "middle": [
                            "S"
                        ],
                        "last": "Larkey",
                        "suffix": ""
                    },
                    {
                        "first": "Lisa",
                        "middle": [],
                        "last": "Ballesteros",
                        "suffix": ""
                    },
                    {
                        "first": "Margaret",
                        "middle": [
                            "E"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval",
                "volume": "",
                "issue": "",
                "pages": "275--282",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Leah S. Larkey, Lisa Ballesteros, and Margaret E. Con- nel. 2002. Improving Stemming for Arabic Informa- tion Retrieval: Light Stemming and Co-occurrence Analysis. Proceedings of the 25th annual interna- tional ACM SIGIR conference on Research and de- velopment in information retrieval, pp. 275-282.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "A Novel Method for Stemmer Generation Based on Hidden Markov Models",
                "authors": [
                    {
                        "first": "Massimo",
                        "middle": [],
                        "last": "Melucci",
                        "suffix": ""
                    },
                    {
                        "first": "Nicola",
                        "middle": [],
                        "last": "Orio",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the twelfth international conference on Information and knowledge management",
                "volume": "",
                "issue": "",
                "pages": "131--138",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Massimo Melucci and Nicola Orio. 2003. A Novel Me- thod for Stemmer Generation Based on Hidden Mar- kov Models. Proceedings of the twelfth international conference on Information and knowledge manage- ment, pp. 131-138.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "The effectiveness of stemming for natural-language access to Slovene textual data",
                "authors": [
                    {
                        "first": "Mirko",
                        "middle": [],
                        "last": "Popovi\u010d",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Willett",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Journal of the American Society for Information Science and Technology",
                "volume": "43",
                "issue": "5",
                "pages": "384--390",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mirko Popovi\u010d and Peter Willett. 1992. The effective- ness of stemming for natural-language access to Slo- vene textual data. Journal of the American Society for Information Science and Technology, Vol. 43, No. 5, pp. 384-390.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "An algorithm for suffix stripping. Program",
                "authors": [
                    {
                        "first": "Martin",
                        "middle": [
                            "F"
                        ],
                        "last": "Porter",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "",
                "volume": "14",
                "issue": "",
                "pages": "130--137",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Martin F. Porter. 1980. An algorithm for suffix strip- ping. Program, Vol. 14, No. 3, pp. 130-137.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Okapi at TREC-3",
                "authors": [
                    {
                        "first": "Stephen",
                        "middle": [
                            "E"
                        ],
                        "last": "Robertson",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Walker",
                        "suffix": ""
                    },
                    {
                        "first": "Susan",
                        "middle": [],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "Micheline",
                        "middle": [],
                        "last": "Hancock-Beaulieu",
                        "suffix": ""
                    },
                    {
                        "first": "Mike",
                        "middle": [],
                        "last": "Gatford",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the Third Text REtrieval Conference",
                "volume": "",
                "issue": "",
                "pages": "109--126",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1995. Okapi at TREC-3. Proceedings of the Third Text REtrieval Conference, NIST Special Publication 500-226. pp. 109-126.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Mongolian phrase generation and morphological analysis based on phonological and morphological constraints",
                "authors": [
                    {
                        "first": "Enkhbayar",
                        "middle": [],
                        "last": "Sanduijav",
                        "suffix": ""
                    },
                    {
                        "first": "Takehito",
                        "middle": [],
                        "last": "Utsuro",
                        "suffix": ""
                    },
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Sato",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Japanese)",
                "volume": "12",
                "issue": "",
                "pages": "185--205",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Enkhbayar Sanduijav, Takehito Utsuro, and Satoshi Sato. 2005. Mongolian phrase generation and mor- phological analysis based on phonological and mor- phological constraints. Journal of Natural Language Processing, Vol. 12, No. 5, pp. 185-205. (In Japa- nese) .",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "On designing an automated Malaysian stemmer for the Malay language",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Sock",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Tai",
                        "suffix": ""
                    },
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Cheng",
                        "suffix": ""
                    },
                    {
                        "first": "Noor",
                        "middle": [
                            "A"
                        ],
                        "last": "Ong",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Abdullah",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the fifth international workshop on information retrieval with Asian languages",
                "volume": "",
                "issue": "",
                "pages": "207--208",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sock Y. Tai, Cheng O. Ong, and Noor A. Abdullah. 2000. On designing an automated Malaysian stem- mer for the Malay language. Proceedings of the fifth international workshop on information retrieval with Asian languages, Hong Kong, pp. 207-208.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Mongolian grammar for grades I-IV",
                "authors": [
                    {
                        "first": "Bayarmaa",
                        "middle": [],
                        "last": "Ts",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bayarmaa Ts. 2002. Mongolian grammar for grades I- IV. (In Mongolian).",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Building a Question Answering Test Collection",
                "authors": [
                    {
                        "first": "Ellen",
                        "middle": [
                            "M"
                        ],
                        "last": "Voorhees",
                        "suffix": ""
                    },
                    {
                        "first": "Dawn",
                        "middle": [
                            "M"
                        ],
                        "last": "Tice",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
                "volume": "",
                "issue": "",
                "pages": "200--207",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ellen M. Voorhees and Dawn M. Tice. 2000. Building a Question Answering Test Collection. Proceedings of the 23rd Annual International ACM SIGIR Confe- rence on Research and Development in Information Retrieval, pp. 200-207.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Corpus-based stemming using co-occurrence of word variants",
                "authors": [
                    {
                        "first": "Jinxi",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Bruce",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Croft",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "ACM Transactions on Information Systems",
                "volume": "16",
                "issue": "1",
                "pages": "61--81",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jinxi Xu and Bruce W. Croft. 1998. Corpus-based stemming using co-occurrence of word variants. ACM Transactions on Information Systems, Vol. 16, No. 1, pp. 61-81.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "num": null,
                "text": "Inflection types of content words in Mongolian phrases.",
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "num": null,
                "text": "in the phrase Remove suffixes and extract a content word Check if the content word is a verb Detect a suffix in the phrase Remove suffixes and extract a content word Overview of our lemmatization method for Mongolian. Lemmatization for noun phrases Vowel insertion rule Insert an eliminated vowel Insert an eliminated vowel Identify loanword Vowel insertion rule Lemmatization for verb phrases",
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "num": null,
                "text": "Fragment of verb suffix segmentation rule.",
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF4": {
                "num": null,
                "text": "Fragment of suffix segmentation rules for loanwords.",
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF5": {
                "num": null,
                "text": "Errors of our lemmatization method.",
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF6": {
                "num": null,
                "text": "Example of technical abstract.",
                "uris": null,
                "type_str": "figure"
            },
            "TABREF0": {
                "num": null,
                "content": "<table><tr><td/><td>#Phrase types</td><td>Sanduijav et al. (2005)</td><td>Khaltar et al. (2006)</td><td>Our method</td></tr><tr><td>Noun</td><td>13,016</td><td>57.6</td><td>87.7</td><td>92.5</td></tr><tr><td>Verb</td><td>1,797</td><td>24.5</td><td>23.8</td><td>24.5</td></tr><tr><td>Adjective</td><td>609</td><td>82.6</td><td>83.5</td><td>83.9</td></tr><tr><td>Numeral</td><td>56</td><td>41.1</td><td>80.4</td><td>81.2</td></tr><tr><td>Total</td><td>15,478</td><td>63.2</td><td>72.3</td><td>78.2</td></tr></table>",
                "text": "Accuracy of lemmatization (%).",
                "html": null,
                "type_str": "table"
            },
            "TABREF2": {
                "num": null,
                "content": "<table><tr><td/><td>Keyword query</td><td>List query</td></tr><tr><td>No lemmatization</td><td>0.2312</td><td>0.2766</td></tr><tr><td>Sanduijav et al. (2005)</td><td>0.2882</td><td>0.2834</td></tr><tr><td>Khaltar et al. (2006)</td><td>0.3134</td><td>0.3127</td></tr><tr><td>Our method</td><td>0.3149</td><td>0.3114</td></tr><tr><td>Correct lemmatization</td><td>0.3268</td><td>0.3187</td></tr></table>",
                "text": "MAP of lemmatization methods.",
                "html": null,
                "type_str": "table"
            },
            "TABREF3": {
                "num": null,
                "content": "<table><tr><td/><td>Keyword query</td><td>List query</td></tr><tr><td>No lemmatization vs. Correct lemmatization</td><td>&lt;&lt;</td><td>&lt;</td></tr><tr><td>No lemmatization vs. Sanduijav et al. (2005)</td><td>&lt;&lt;</td><td>-</td></tr><tr><td>No lemmatization vs. Khaltar et al. (2006)</td><td>&lt;&lt;</td><td>&lt;</td></tr><tr><td>No lemmatization vs. Our method</td><td>&lt;&lt;</td><td>&lt;</td></tr><tr><td>Sanduijav et al. (2005) vs. Our method</td><td>&lt;&lt;</td><td>&lt;</td></tr><tr><td>Khaltar et al. (2006) vs. Our method</td><td>-</td><td>-</td></tr><tr><td>Our method vs. Correct lemmatization</td><td>&lt;</td><td>-</td></tr></table>",
                "text": ": t-test result of the differences between lemmatization methods.",
                "html": null,
                "type_str": "table"
            }
        }
    }
}