{
    "paper_id": "I08-1003",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:41:18.988253Z"
    },
    "title": "A Hybrid Approach to the Induction of Underlying Morphology",
    "authors": [
        {
            "first": "Michael",
            "middle": [],
            "last": "Tepper",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Washington Seattle",
                "location": {
                    "postCode": "98195",
                    "region": "WA"
                }
            },
            "email": "mtepper@u.washington.edu"
        },
        {
            "first": "Fei",
            "middle": [],
            "last": "Xia",
            "suffix": "",
            "affiliation": {},
            "email": "fxia@u.washington.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present a technique for refining a baseline segmentation and generating a plausible underlying morpheme segmentation by integrating handwritten rewrite rules into an existing state-of-the-art unsupervised morphological induction procedure. Performance on measures which consider surface-boundary accuracy and underlying morpheme consistency indicates this technique leads to improvements over baseline segmentations for English and Turkish word lists.",
    "pdf_parse": {
        "paper_id": "I08-1003",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present a technique for refining a baseline segmentation and generating a plausible underlying morpheme segmentation by integrating handwritten rewrite rules into an existing state-of-the-art unsupervised morphological induction procedure. Performance on measures which consider surface-boundary accuracy and underlying morpheme consistency indicates this technique leads to improvements over baseline segmentations for English and Turkish word lists.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The primary goal of unsupervised morphological induction (UMI) is the simultaneous induction of a reasonable morphological lexicon as well as an optimal segmentation of a corpus of words, given that lexicon. The majority of existing approaches employ statistical modeling towards this goal, but differ with respect to how they learn or refine the morphological lexicon. While some approaches involve lexical priors, either internally motivated or motivated by the minimal description length (MDL) criterion, some utilize heuristics. Pure maximum likelihood (ML) approaches may refine the lexicon with heuristics in lieu of explicit priors (Creutz and Lagus, 2004) , or not make categorical refinements at all concerning which morphs are included, only probabilistic refinements through a hierarchical EM procedure (Peng and Schuurmans, 2001) . Approaches that optimize the lexicon with respect to priors come in several flavors. There are basic maximum a priori (MAP) approaches that try to maximize the probability of the lexicon against linguistically motivated priors (Deligne and Bimbot, 1997; Snover and Brent, 2001 ; Creutz and Lagus, 2005) . An alternative to MAP, MDL approaches use their own set of priors motivated by complexity theory. These studies attempt to minimize lexicon complexity (bit-length in crude MDL) while simultaneously minimizing the complexity (by maximizing the probability) of the corpus given the lexicon (de Marcken, 1996; Goldsmith, 2001; Creutz and Lagus, 2002) .",
                "cite_spans": [
                    {
                        "start": 639,
                        "end": 663,
                        "text": "(Creutz and Lagus, 2004)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 814,
                        "end": 841,
                        "text": "(Peng and Schuurmans, 2001)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 1071,
                        "end": 1097,
                        "text": "(Deligne and Bimbot, 1997;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1098,
                        "end": 1120,
                        "text": "Snover and Brent, 2001",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1123,
                        "end": 1146,
                        "text": "Creutz and Lagus, 2005)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 1437,
                        "end": 1455,
                        "text": "(de Marcken, 1996;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1456,
                        "end": 1472,
                        "text": "Goldsmith, 2001;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1473,
                        "end": 1496,
                        "text": "Creutz and Lagus, 2002)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Morphological Induction",
                "sec_num": "1.1"
            },
            {
                "text": "Many of the approaches mentioned above utilize a simplistic unigram model of morphology to produce the segmentation of the corpus given the lexicon. Substrings in the lexicon are proposed as morphs within a word based on frequency alone, independently of phrase-, word-and morph-surroundings (de Marcken, 1996; Peng and Schuurmans, 2001; Creutz and Lagus, 2002) . There are many approaches, however, which further constrain the segmentation procedure. The work by Creutz and Lagus (2004; 2005; constrains segmentation by accounting for morphotactics, first assigning mophotactic categories (prefix, suffix, and stem) to baseline morphs, and then seeding and refining an HMM using those category assignments. Other more structured models include Goldsmith's (2001) work which, instead of inducing morphemes, induces morphological signatures like {\u00f8, s, ed, ing} for English regular verbs. Some techniques constrain possible analyses by employing approximations for morphological meaning or usage to prevent false derivations (like singed = sing + ed ). There is work by Schone and Jurafsky (2000; 2001) where meaning is proxied by wordand morph-context, condensed via LSA. Yarowsky and Wicentowski (2000) and Yarowsky et al. (2001) use expectations on relative frequency of aligned inflected-word, stem pairs, as well as POS context features, both of which approximate some sort of meaning.",
                "cite_spans": [
                    {
                        "start": 292,
                        "end": 310,
                        "text": "(de Marcken, 1996;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 311,
                        "end": 337,
                        "text": "Peng and Schuurmans, 2001;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 338,
                        "end": 361,
                        "text": "Creutz and Lagus, 2002)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 464,
                        "end": 487,
                        "text": "Creutz and Lagus (2004;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 488,
                        "end": 493,
                        "text": "2005;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 745,
                        "end": 763,
                        "text": "Goldsmith's (2001)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1069,
                        "end": 1095,
                        "text": "Schone and Jurafsky (2000;",
                        "ref_id": null
                    },
                    {
                        "start": 1096,
                        "end": 1101,
                        "text": "2001)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1172,
                        "end": 1203,
                        "text": "Yarowsky and Wicentowski (2000)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 1208,
                        "end": 1230,
                        "text": "Yarowsky et al. (2001)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Morphological Induction",
                "sec_num": "1.1"
            },
            {
                "text": "Allomorphy, or allomorphic variation, is the process by which a morpheme varies (orthographically or phonologically) in particular contexts, as constrained by a grammar. 1 To our knowledge, there is only handful of work within UMI attempting to integrate allomorphy into morpheme discovery. A notable approach is the Wordframe model developed by Wicentowski (2002) , which performs weighted edits on root-forms, given context, as part of a larger similarity alignment model for discovering <inflected-form, root-form> pairs.",
                "cite_spans": [
                    {
                        "start": 346,
                        "end": 364,
                        "text": "Wicentowski (2002)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Allomorphy in UMI",
                "sec_num": "1.2"
            },
            {
                "text": "Morphological complexity is fixed by a template; the original was designed for inflectional morphologies and thus constrained to finding an optional affix on either side of a stem. Such a template would be difficult to design for agglutinative morphologies like Turkish or Finnish, where stems are regularly inflected by chains of affixes. Still, it can be extended. A notable recent extension accounts for phenomena like infixation and reduplication in Filipino (Cheng and See, 2006) .",
                "cite_spans": [
                    {
                        "start": 463,
                        "end": 484,
                        "text": "(Cheng and See, 2006)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Allomorphy in UMI",
                "sec_num": "1.2"
            },
            {
                "text": "In terms of allomorphy, the approach succeeds at generalizing allomorphic patterns, both steminternally and at points of affixation. A major drawback is that, so far, it does not account for affix allomorphy involving character replacement-that is, beyond point-of-affixation epentheses or deletions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Allomorphy in UMI",
                "sec_num": "1.2"
            },
            {
                "text": "Our approach aims to integrate a rule-based component consisting of hand-written rewrite rules into an otherwise unsupervised morphological induction procedure in order to refine the segmentations it produces.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our Approach",
                "sec_num": "1.3"
            },
            {
                "text": "The major contribution of this work is a rulebased component which enables simple encoding of context-sensitive rewrite rules for the analysis of induced morphs into plausible underlying morphemes. 2 A rule has the form general form:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-Sensitive Rewrite Rules",
                "sec_num": "1.3.1"
            },
            {
                "text": "\u03b1 underlying \u2192 \u03b2 surface / \u03b3 l. context _ \u03b4 r. context (1)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-Sensitive Rewrite Rules",
                "sec_num": "1.3.1"
            },
            {
                "text": "It is also known as a SPE-style rewrite rule, part of the formal apparatus to introduced by Chomsky and Halle (1968) to account for regularities in phonology. Here we use it to describe orthographic patterns. Mapping morphemes to underlying forms with context-sensitive rewrite rules allows us to peer through the fragmentation created by allomorphic variation. Our experiments will show that this has the effect of allowing for more unified, consistent morphemes while simultaneously making surface boundaries more transparent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-Sensitive Rewrite Rules",
                "sec_num": "1.3.1"
            },
            {
                "text": "For example, take the English multipurpose inflectional suffix \u2022s, normally written as \u2022s, but as \u2022es after sibilants (s,sh, ch, . . . ). We can write the following SPE-style rule to account for its variation. This rule says, \"Insert an e (map nothing to e) following a character marked as a sibilant (+SIB) and a morphological boundary (+), at the focus position (_), immediately preceding an s.\" In short, it enables the mapping of the underlying form \u2022s to \u2022es by inserting an e before s where appropriate. When this rule is reversed to produce underlying analyses, the \u2022es variant in such words as glasses, matches, swishes, and buzzes can be identified with the \u2022s variant in words like plots, sits, quakes, and nips.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-Sensitive Rewrite Rules",
                "sec_num": "1.3.1"
            },
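To make the reversal concrete, here is a minimal sketch (our illustration, not the authors' implementation) of undoing the e-insertion rule on an already-segmented surface form, recovering the underlying \u2022s suffix. The sibilant inventory and the input format (morphs joined by "+") are assumptions made for the example.

```python
import re

# Hypothetical sibilant set for English orthography (an assumption, not
# taken from the paper).
SIBILANTS = ("s", "sh", "ch", "x", "z")

def underlying(segmented: str) -> str:
    """Map a segmented surface form like 'glass+es' to 'glass+s'.

    Reverses the rule "insert e after a sibilant and a boundary, before s"
    by deleting the epenthetic e in exactly that context.
    """
    pattern = r"(" + "|".join(SIBILANTS) + r")\+es\b"
    return re.sub(pattern, r"\1+s", segmented)

print(underlying("glass+es"))  # glass+s
print(underlying("plot+s"))    # unchanged: plot+s
```

Applied to the paper's examples, glasses, matches, swishes, and buzzes all map onto the same underlying suffix \u2022s as plots or sits.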
            {
                "text": "Before the start of the procedure, there is a preprocessing step to derive an initial segmentation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Procedure",
                "sec_num": "1.3.2"
            },
            {
                "text": "This segmentation is fed to the EM Stage, the goal of which is to find the maximum probability segmentation of a wordlist into underlying morphemes. First, analyses of initial segments are produced by rule. Then, their frequency is used to determine their likelihood as underlying morphemes. Finally, probability of a segmentation into underlying morphemes is maximized.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Procedure",
                "sec_num": "1.3.2"
            },
            {
                "text": "The output segmentation feeds into the Split Stage, where heuristics are used to split large, highfrequency segments that fail to break into smaller underlying morphemes during the EM algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Procedure",
                "sec_num": "1.3.2"
            },
            {
                "text": "A flowchart of the procedure is given in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 41,
                        "end": 49,
                        "text": "Figure 1",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Procedure",
                "sec_num": "2"
            },
            {
                "text": "Preprocessing We use the Categories-MAP algorithm developed by Creutz and Lagus (2005; to produce an initial morphological segmentation. Here, a segmentation is optimized by maximum a posteriori estimate given priors on length, frequency, and usage of morphs stored in the model. Their procedure begins with morphological tags indicating basic morphotactics (prefix, stem, suffix, noise) being assigned heuristically to a baseline segmentation. That tag assignment is then used to seed an HMM.",
                "cite_spans": [
                    {
                        "start": 63,
                        "end": 86,
                        "text": "Creutz and Lagus (2005;",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Procedure",
                "sec_num": "2"
            },
            {
                "text": "[Figure 1 flowchart labels: Preprocess (Morfessor 0.9 Categories-MAP); EM Stage, Steps 1-3: Propose Underlying Analyses, Estimate HMM Probabilities, Re-segment Wordlist; Split Stage, Steps 4-7: Re-tag Segmentation, Estimate HMM Probabilities, Re-segment (Split) Morphs; inputs: Rewrite Rules, Orig. Wordlist, analyses, probs.]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Procedure",
                "sec_num": null
            },
            {
                "text": "Step 5 Optimal segmentation of a word is simultaneously the best tag and morph 3 sequence given that word. The contents of the model are optimized with respect to length, frequency, and usage priors during splitting and joining phases. The final output is a tagged segmentation of the input word-list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Rewrite Rules",
                "sec_num": null
            },
            {
                "text": "The model we train is a modified version of the morphological HMM from the work of Lagus (2004-2006) , where a word w consists of a sequence of morphs generated by a morphologicalcategory tag sequence. The difference between their HMM and ours is that theirs emits surface morphs, while ours emits underlying morphemes. Morphemes may either be analyses proposed by rule or surface morphs acting as morphemes. We do not modify the tags Creutz and Lagus use (prefix, stem, suffix, and noise).",
                "cite_spans": [
                    {
                        "start": 83,
                        "end": 100,
                        "text": "Lagus (2004-2006)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "We proceed by EM, initialized by the preprocessed segmentation. Rule-generated underlying analyses are produced (Step 1) and used to estimate the emission probabilities P(u_i | t_i) and transition probabilities P(t_i | t_{i-1}) (Step 2). In successive E-steps, Steps 1 and 2 are repeated. The M-step (Step 3) finds the maximum-probability decoding of each word according to Eq. (6), i.e., the maximum-probability tag and morpheme sequence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "Step 1 -Derive Underlying Analyses In this step, handwritten context-sensitive rewrite rules derive context-relevant analyses for morphs in the preprocessed segmentation. These analyses are produced by a set of ordered rules that propose dele-3 A morph is a linguistic morpheme as it occurs in production, i.e. as it occurs in a surface word. tions, insertions, or substitutions when triggered by the proper characters around a segmentation boundary. 4 A rule applies wherever contextually triggered, from left to right, and may apply more than once to the same word. To prevent the runaway application of certain rules, a rule may not apply to its own output. The result of applying a rule is a (possibly spelling-changed) segmented word, which is fed to the next rule. This enables multi-step analyses by using rules designed specifically to apply to the outputs of other rules. See Figure 2 for a small example.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 885,
                        "end": 893,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
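The ordered, feed-forward rule application described above can be sketched as a tiny rule engine. The single rule shown, the "+" boundary marker, and the function name are illustrative assumptions, not the paper's actual rule set:

```python
import re

# Hypothetical English rule: morph-final "i" before a boundary followed by
# "e" is analyzed as underlying "y" (e.g., citi+es -> city+es).
RULES = [
    (re.compile(r"i\+(?=e)"), "y+"),
]

def derive_underlying(segmented):
    """Apply each ordered rule in one left-to-right pass over the string.

    A rule never sees its own output: each rule runs a single pass, and
    its result is fed to the *next* rule, enabling multi-step analyses.
    """
    for pattern, replacement in RULES:
        segmented = pattern.sub(replacement, segmented)
    return segmented
```

Here `derive_underlying("citi+es")` yields `"city+es"`, while `"glass+es"` passes through unchanged, mirroring the citi/glass contrast in Figure 2.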
            {
                "text": "Step 2 -Estimate HMM Probabilities Transition probabilities P (t i |t i\u22121 ) are estimated by maximum likelihood, given a tagged input segmentation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
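The maximum-likelihood transition estimate is plain bigram counting over tagged segmentations; a minimal sketch, with an invented start symbol and toy tag sequences:

```python
from collections import Counter

def estimate_transitions(tag_sequences, start="<s>"):
    """ML estimate: P(t_i | t_{i-1}) = count(t_{i-1}, t_i) / count(t_{i-1})."""
    bigrams, prev_counts = Counter(), Counter()
    for tags in tag_sequences:
        padded = [start] + list(tags)
        for prev, cur in zip(padded, padded[1:]):
            bigrams[(prev, cur)] += 1
            prev_counts[prev] += 1
    return {pair: n / prev_counts[pair[0]] for pair, n in bigrams.items()}
```

For example, over the toy corpus [["STM", "SUF"], ["STM", "STM"], ["STM", "SUF"]], STM is followed by SUF in two of its three occurrences as a predecessor, so P(SUF | STM) = 2/3.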
            {
                "text": "Emission probabilities P (u i |t i ) are also estimated by maximum likelihood, but the situation is slightly more complex; the probability of morphemes u i are estimated according to frequencies of association (coindexation) with surface morphs s i and tags t i .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "Furthermore, an underlying morpheme can either be identical to its associated surface morph s_i when no rules apply, or be a rule-generated analysis. For the sake of clarity, we call the former u_i and the latter u\u2032_i, as defined below. [Figure 2: Underlying analyses for a segmentation are generated by passing it through context-sensitive rewrite rules. Rules apply to some morphs (e.g., citi \u2192 city) but not to others (e.g., glass \u2192 glass).] The probability of u_i given tag t_i is calculated by summing over all allomorphs s of u_i the probability that u_i realizes s in the context of tag t_i:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 238,
                        "end": 246,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "u i = u i if u i = s i u i otherwise",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "P (u i |t i ) = s\u2208allom.-of(ui) P (u i , s|t i ) (3) = s\u2208allom.-of(ui) P (u i |s, t i )P (s|t i ) (4)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "Both Eq. (3) and Eq. (4) are trivial to estimate by counting on our input from Step 1 (see Figure 2). We show (4) because it has the term P(u_i | s, t_i), which may be used for thresholding and discounting terms of the sum where u_i is rarely associated with a particular allomorph and tag. In the future, such discounting may be useful to filter out noise generated by noisy or permissive rules. So far, this type of discounting has not improved results.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 91,
                        "end": 99,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
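Eq. (3) reduces to counting the coindexed <underlying, surface, tag> triples produced in Step 1; the triples below are invented toy data for illustration:

```python
from collections import Counter

# (underlying morpheme, surface morph, tag) associations from Step 1
triples = [
    ("city", "citi", "STM"),    # rule-generated analysis of surface "citi"
    ("city", "city", "STM"),    # surface morph acting as its own morpheme
    ("glass", "glass", "STM"),
]

count_ust = Counter(triples)           # joint counts of (u, s, t)
count_t = Counter(t for _, _, t in triples)  # tag marginals

def p_u_given_t(u, t):
    """P(u_i | t_i): sum over allomorphs s of P(u_i, s | t_i), as in Eq. (3)."""
    total = sum(c for (uu, _, tt), c in count_ust.items() if uu == u and tt == t)
    return total / count_t[t]
```

The allomorphs citi and city both count toward the underlying morpheme city, so `p_u_given_t("city", "STM")` is 2/3 on this toy data.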
            {
                "text": "Step 3 -Resegment Word List Next we resegment the word list into underlying morphemes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "Searching for the best breakdown of a word w into morpheme sequence u and tag sequence t, we maximize the probability of the following formula:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P (w, u, t) = P (w|u, t)P (u, t) = P (w|u, t)P (u|t)P (t)",
                        "eq_num": "(5)"
                    }
                ],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "To simplify, we assume that P (w|u, t) is equal to one. 5 With this assumption in mind, Eq (5) reduces to P (u|t)P (t). With independence assumptions and a local time horizon, we estimate:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "argmax u,t P (u|t)P (t) \u2248 argmax u,t n i=1 P (u i |t i )P (t i |t i\u22121 ) (6) 5",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "In other words, we make the assumption that a sequence of underlying morphemes and tags corresponds to just one word. This assumption may need revision in cases where morphemes can optionally undergo the types of spelling changes we are trying to encode; this has not been the case for the languages under investigation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
            {
                "text": "The search for the maximum probability tag and morph sequence in Eq (6) is carried out by a modified version of the Viterbi algorithm. The maximum probability segmentation for a given word may be a mixture of both types of underlying morpheme, u i and u i . Also, wherever we have a choice between emitting u i , identical to the surface form, or u i , an analysis with rule-proposed changes, the highest probability of the two is always selected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EM Stage",
                "sec_num": "2.1"
            },
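The decoding of Eq. (6) can be sketched as a small Viterbi search over tag sequences, where each position offers its surface morph and, when available, a rule-generated candidate, and the higher-probability emission is selected. All tags, probabilities, and names below are invented for illustration:

```python
def viterbi(candidates, tags, em, tr, start="<s>"):
    """Maximize prod_i P(u_i | t_i) P(t_i | t_{i-1}), as in Eq. (6).

    candidates: per position, the underlying forms on offer (surface-identical
    and/or rule-generated); em: (u, t) -> prob; tr: (t_prev, t) -> prob.
    """
    chart = {start: (1.0, [], [])}  # tag -> (prob, tag path, morpheme path)
    for cands in candidates:
        new_chart = {}
        for t in tags:
            # pick the best-scoring candidate emission for this tag
            u, e = max(((u, em.get((u, t), 0.0)) for u in cands),
                       key=lambda pair: pair[1])
            new_chart[t] = max(
                ((p * tr.get((prev, t), 0.0) * e, path + [t], morphs + [u])
                 for prev, (p, path, morphs) in chart.items()),
                key=lambda entry: entry[0])
        chart = new_chart
    return max(chart.values(), key=lambda entry: entry[0])

em = {("city", "STM"): 0.5, ("citi", "STM"): 0.1, ("es", "SUF"): 0.8}
tr = {("<s>", "STM"): 0.9, ("STM", "SUF"): 0.7, ("STM", "STM"): 0.2,
      ("SUF", "SUF"): 0.5}
prob, tag_path, morphs = viterbi([["citi", "city"], ["es"]],
                                 ["STM", "SUF"], em, tr)
```

On this toy model, the rule-generated candidate city out-scores the surface morph citi, so the decoder emits city es tagged STM SUF with probability 0.9 * 0.5 * 0.7 * 0.8 = 0.252.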
            {
                "text": "Many times, large morphs have substructure and yet are too frequent to be split when segmented by the HMM in the EM Stage. To overcome this, we approximately follow the heuristic procedure 6 laid out by Creutz and Lagus (2004) , encouraging splitting of larger morphs into smaller underlying morphemes. This process has the danger of introducing many false analyses, so first the segmentation must be re-tagged (Step 4) to identify which morphemes are noise and should not be used. Once we re-tag, we re-analyze morphs in the surface segmentation (Step 5) and re-estimate HMM probabilities (Step 6). (for Steps 5 and 6, refer to Steps 1 and 2). Finally, we use these HMM probabilities to split morphs (Step 7).",
                "cite_spans": [
                    {
                        "start": 203,
                        "end": 226,
                        "text": "Creutz and Lagus (2004)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Split Stage",
                "sec_num": "2.2"
            },
            {
                "text": "Step 4 -Re-tag the Segmentation To identify noise morphemes, we estimate a distribution P (CAT |u i ) for three true categories CAT (prefix, stem, or suffix) and one noise category; we then assign categories randomly according to this distribution. Stem probabilities are proportional to stemlength, while affix probabilities are proportional to left-or right-perplexity. The probability of true categories are also tied to the value of sigmoid-cutoff parameters, the most important of which is b, which thresholds the probability of both types of affix (prefix and suffix).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Split Stage",
                "sec_num": "2.2"
            },
            {
                "text": "The probability of the noise category is conversely related to the product of true category probabilities; when true categories are less probable, noise becomes more probable. Thus, adjusting parameters like b can increase or decrease the probability of noise.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Split Stage",
                "sec_num": "2.2"
            },
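One way to realize this category distribution is with sigmoid cutoffs on stem length and affix perplexity, and a noise score tied inversely to the true categories. The functional form, the slope, and the stem-length cutoff of 4 below are our illustrative assumptions in the spirit of Creutz and Lagus (2004), not the exact formulas:

```python
import math

def cutoff(value, b):
    """Sigmoid threshold: values well above b approach 1, well below approach 0."""
    return 1.0 / (1.0 + math.exp(-(value - b)))

def category_distribution(length, left_perp, right_perp, b):
    """Sketch of P(CAT | morph): true-category scores plus inversely tied noise."""
    p_pre = cutoff(right_perp, b)   # prefixes: high right-perplexity
    p_suf = cutoff(left_perp, b)    # suffixes: high left-perplexity
    p_stem = cutoff(length, 4.0)    # hypothetical stem-length cutoff
    # noise grows as the true categories become less probable
    p_noise = (1.0 - p_pre) * (1.0 - p_suf) * (1.0 - p_stem)
    total = p_pre + p_suf + p_stem + p_noise
    return {"PRE": p_pre / total, "SUF": p_suf / total,
            "STM": p_stem / total, "NOI": p_noise / total}
```

Raising b pushes the true-category sigmoids down and so raises the share of noise, which is how b later discourages excessive splitting.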
            {
                "text": "Step 7 -Split Morphs In this step, we examine <morph, tag> pairs in the segmentation to see if a split into sub-morphemes is warranted. We constrain this process by restricting splitting to stems (with the option to split affixes), and by splitting into restricted sequences of tags, particularly avoiding noise. We also use parameter b in Step 4 as a way to discourage excessive splitting by tagging more morphemes as noise. Stems are split into the sequence: (PRE * STM SUF * ). Affixes (prefixes and suffixes) are split into other affixes of the same category. Whether to split affixes depends on typological properties of the language. If a language has agglutinative suffixation, for example, we hand-set a parameter to allow suffix-splitting.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Split Stage",
                "sec_num": "2.2"
            },
            {
                "text": "When examining a morph for splitting, we search over all segmentations with at least one split, and choose the one that is both optimal according to Eq (6) and does not violate our constraints on what category sequences are allowed for its category. We end this step by returning to the EM Stage, where another cycle of EM is performed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Split Stage",
                "sec_num": "2.2"
            },
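The search over all segmentations with at least one split can be enumerated directly; this sketch omits the Eq. (6) scoring and the category-sequence constraints, and the function name is our own:

```python
from itertools import combinations

def all_splits(morph):
    """Yield every way to cut `morph` into two or more sub-morphemes."""
    positions = range(1, len(morph))
    for k in range(1, len(morph)):            # number of cut points
        for cuts in combinations(positions, k):
            bounds = (0,) + cuts + (len(morph),)
            yield [morph[a:b] for a, b in zip(bounds, bounds[1:])]
```

A length-n morph yields 2^(n-1) - 1 candidate splits; each would then be scored by Eq. (6) and filtered against the allowed tag sequences, e.g., (PRE* STM SUF*) for stems.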
            {
                "text": "In this section we report and discuss development results for English and Turkish. We also report finaltest results for both languages. Results for the preprocessed segmentation are consistently used as a baseline. In order to isolate the effect of the rewrite rules, we also compare against results taken on a parallel set of experiments, run with all the same parameters but without rule-generated underlying morphemes, i.e. without morphemes of type u i . But before we get to these results, we will describe the conditions of our experiments. First we introduce the evaluation metrics and data used, and then detail any parameters set during development.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments and Results",
                "sec_num": "3"
            },
            {
                "text": "We use two procedures for evaluation, described in the Morpho Challenge '05 and '07 Competition Reports (Kurimo et al., 2006; Kurimo et al., 2007) . Both procedures use gold-standards created with commercially available morphological analyzers for each language. Each procedure is associated with its own F-score-based measure.",
                "cite_spans": [
                    {
                        "start": 104,
                        "end": 125,
                        "text": "(Kurimo et al., 2006;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 126,
                        "end": 146,
                        "text": "Kurimo et al., 2007)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "3.1"
            },
            {
                "text": "The first was used in Morpho Challenge '05, and measures the extent to which boundaries match between the surface-layer of our segmentations and gold-standard surface segmentations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "3.1"
            },
            {
                "text": "The second was used in Morpho Challenge '07 and measures the extent to which morphemes match between the underlying-layer of our segmentations and gold-standard underlying analyses. The F-score here is not actually on matched morphemes, but instead on matched morpheme-sharing word-pairs. A point is given whenever a morpheme-sharing wordpair in the gold-standard segmentation also shares morphemes in the test segmentation (for recall), and vice-versa for precision.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "3.1"
            },
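A simplified sketch of this pair-based scoring. The official '07 evaluation samples word pairs and normalizes per word; here we count shared-morpheme pairs exhaustively over invented toy analyses, so the numbers illustrate the mechanism only:

```python
from itertools import combinations

def sharing_pairs(analyses):
    """Word pairs whose analyses share at least one underlying morpheme."""
    return {frozenset((w1, w2))
            for (w1, m1), (w2, m2) in combinations(analyses.items(), 2)
            if set(m1) & set(m2)}

def pair_fscore(gold, test):
    g, t = sharing_pairs(gold), sharing_pairs(test)
    recall = len(g & t) / len(g)
    precision = len(g & t) / len(t)
    return 2 * precision * recall / (precision + recall)

# invented toy analyses for illustration
gold = {"cities": ["city", "PL"], "city": ["city"],
        "glasses": ["glass", "PL"], "glass": ["glass"]}
test = {"cities": ["citi", "es"], "city": ["city"],
        "glasses": ["glass", "es"], "glass": ["glass"]}
f = pair_fscore(gold, test)
```

Here the test analysis citi es fails to link cities with city (losing recall) but still links cities with glasses through es, mirroring how unresolved allomorphy hurts this measure.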
            {
                "text": "Training Data The data-sets used for training were provided by the Helsinki University of Technology in advance of the Morpho Challenge '07 and were downloaded by the authors from the contest website 7 . According to the website, they were compiled from the University of Leipzig Wortschatz Corpora.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "3.2"
            },
            {
                "text": "Tokens Types English 3 \u00d7 10 6 6.22 \u00d7 10 7 3.85 \u00d7 10 5 Turkish 1 \u00d7 10 6 1.29 \u00d7 10 7 6.17 \u00d7 10 5 Test Data For final testing, we use the goldstandard data reserved for final evaluation in the Morpho Challenge '07 contest. The gold-standard consists of approximately 1.17 \u00d7 10 5 English and 3.87 \u00d7 10 5 Turkish analyzed words, roughly a tenth the size of training word-lists. Word pairs that exist in both the training and gold standard are used for evaluation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentences",
                "sec_num": null
            },
            {
                "text": "There are two sets of parameters used in this experiment. First, there are parameters used to produce the initial segmentation. They were set as suggested in Creutz and Lagus (2005), with parameter b tuned on development data. Then there are parameters used for the main procedure. Here we have rewrite rules, numerical parameters, and one typology parameter. Rewrite rules and any orthographic features they use were culled from linguistic literature. We currently have 6 rules for English and 10 for Turkish; see Appendix A.1 for the full set of English rules used. Numerical parameters were set as suggested in Creutz and Lagus (2004), and following their lead we tuned b on development data; we show development results for the following values: b = 100, 300, and 500 (see Figure 3). Finally, as introduced in Section 2.2, we have a hand-set typology parameter that allows us to split prefixes or suffixes if the language has an agglutinative morphology. Since Turkish has agglutinative suffixation, we set this parameter to split suffixes for Turkish.",
                "cite_spans": [
                    {
                        "start": 158,
                        "end": 181,
                        "text": "Creutz and Lagus (2005)",
                        "ref_id": null
                    },
                    {
                        "start": 614,
                        "end": 637,
                        "text": "Creutz and Lagus (2004)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 777,
                        "end": 783,
                        "text": "Figure",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Parameters",
                "sec_num": "3.3"
            },
            {
                "text": "Development results were obtained by evaluating English and Turkish segmentations at several stages, and with several values of parameter b as shown in Figure 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 152,
                        "end": 160,
                        "text": "Figure 3",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Development Results",
                "sec_num": "3.4"
            },
            {
                "text": "Overall, our development results were very positive. For the surface-level evaluation, the largest F-score improvement was observed for English (Figure 3, Chart 1) , 63.75% to 68.99%, a relative F-score gain of 8.2% over the baseline segmentation. The Turkish result also improves to a similar degree, but it is only achieved after the model as been refined by splitting. For English we observe the improvement earlier, after the EM Stage. For the underlying-level evaluation, the largest F-score improvement was observed for Turkish (Chart 4), 31.37% to 54.86%, a relative F-score gain of over 74%.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 144,
                        "end": 163,
                        "text": "(Figure 3, Chart 1)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Development Results",
                "sec_num": "3.4"
            },
            {
                "text": "In most experiments with rules to generate underlying analyses (With Rules), the successive applications of EM and splitting improve results. Without rule-generated forms (No Rules), the results tend to be negative compared to the baseline (see Figure 3, Chart 2), or mixed (Charts 1 and 4). When we look at recall and precision numbers directly, we observe that even without rules, the algorithm produces large recall boosts (especially after splitting). However, these boosts are accompanied by precision losses, which result in unchanged or lower F-scores.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 245,
                        "end": 251,
                        "text": "Figure",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Development Results",
                "sec_num": "3.4"
            },
            {
                "text": "The exception is the underlying-level evaluation of English segmentations (Figure 3, Chart 3). Here we observe a near-parity of F-score gains for segmentations produced with and without underlying morphemes derived by rule. One explanation is that the English initial segmentation is conservative and that coverage gains are the main reason for improved English scores. Creutz and Lagus (2005) note that the Morfessor EM approach often has better coverage than the MAP approach we use to produce the initial segmentation. Also, in English, allomorphy is not as extensive as in Turkish (see Chart 4), where precision losses are greater without rules, i.e. when not representing allomorphs by the same morpheme.",
                "cite_spans": [
                    {
                        "start": 370,
                        "end": 393,
                        "text": "Creutz and Lagus (2005)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 74,
                        "end": 93,
                        "text": "(Figure 3, Chart 3)",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Development Results",
                "sec_num": "3.4"
            },
            {
                "text": "[Table 2 caption:] Morfessor MAP was used as a reference method in the contest. MC Top is the top contestant. For our hybrid approach, we show the F-score obtained with and without using rewrite rules. The splitting parameter b was set to the best-performing value seen in development evaluations (Tr. b = 100, En. b = 500).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Development Results",
                "sec_num": "3.4"
            },
            {
                "text": "Final test results, given in Table 2, are mixed. For English, though we improve on our baseline and on Morfessor MAP trained by Creutz and Lagus, we are beaten by the top unsupervised Morpho Challenge contestant, entered by Delphine Bernhard (2007). Bernhard's approach was purely unsupervised and did not explicitly account for allomorphic phenomena. There are several possible reasons why we were not the top performer here. Our splitting constraint for stems, which allows them to split into stems and chains of affixes, is suited to agglutinative morphologies; it does not seem particularly well suited to English morphology. Our rewrite rules might also be improved. Finally, there may be other, more pressing barriers (besides allomorphy) to improving morpheme induction in English, like ambiguity between homographic morphemes. For Turkish, the story is very different. We observe our baseline segmentation going from a 32.76% F-score to 54.54% when re-segmented using rules, a relative improvement of over 66%. Compared with the top unsupervised approach, Creutz and Lagus's Morfessor MAP, our F-score improvement is over 48%. The distance between our hybrid approach and unsupervised approaches emphasizes the problem allomorphy can be for a language like Turkish. Turkish inflectional suffixes, for instance, regularly undergo multiple spelling rules and can have 10 or more variant forms. Knowing that these variants are all one morpheme makes a difference.",
                "cite_spans": [
                    {
                        "start": 233,
                        "end": 248,
                        "text": "Bernhard (2007)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 29,
                        "end": 36,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Final Test Results",
                "sec_num": "3.5"
            },
            {
                "text": "In this work we showed that we can use a small amount of knowledge in the form of context-sensitive rewrite rules to improve unsupervised segmentations for Turkish and English. This improvement can be quite large. On the morpheme-consistency measure used in the last Morpho Challenge, we observed an improvement of the Turkish segmentation of over 66% against the baseline, and 48% against the topof-the-line unsupervised approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "4"
            },
            {
                "text": "Work in progress includes error analysis of the results to more closely examine the contribution of each rule, as well as developing rule sets for additional languages. This will help highlight various aspects of the most beneficial rules.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "4"
            },
            {
                "text": "There has been recent work on discovering allomorphic phenomena automatically (Dasgupta and Ng, 2007; Demberg, 2007) . It is hoped that our work can inform these approaches, if only by showing what variation is possible, and what is relevant to particular languages. For example, variation in inflectional suffixes, driven by vowel harmony and other phenomena, should be captured for a language like Turkish.",
                "cite_spans": [
                    {
                        "start": 78,
                        "end": 101,
                        "text": "(Dasgupta and Ng, 2007;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 102,
                        "end": 116,
                        "text": "Demberg, 2007)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "4"
            },
            {
                "text": "Future work involves attempting to learn broadcoverage underlying morphology without the handcoded element of the current work. This might involve employing aspects of the most beneficial rules as variable features in rule-templates. It is hoped that we can start to derive underlying morphemes through processes (rules, constraints, etc) suggested by these templates, and possibly learn instantiations of templates from seed corpora. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "4"
            },
            {
                "text": "In this work we focus on orthographic allomorphy.2 Ordered rewrite rules, when restricted from applying to their own output, have similar expressive capabilities to Koskenniemi's two-level constraints. Both define regular relations on strings, both can be compiled into lexical transducers, and both have been used in finite-state analyzers(Karttunen and Beesley, 2001). We choose ordered rules because they are easier to write given our task and resources.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Some special substitution rules, like vowel harmony in Turkish and Finnish, have a spreading effect, moving from syllable to syllable within and beyond morphboundaries. In our formulation, these rules differ from other rules by not being conditioned on a morphboundary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The main difference between our procedure andCreutz and Lagus (2004) is that we allow splitting into two or more morphemes (see Step 7) while they allow binary splits only.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://www.cis.hut.fi/morphochallenge2007/datasets.shtml",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Simple morpheme labeling in unsupervised morpheme analysis",
                "authors": [
                    {
                        "first": "Delphine",
                        "middle": [],
                        "last": "Bernhard",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Working Notes for the CLEF 2007 Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Delphine Bernhard. 2007. Simple morpheme label- ing in unsupervised morpheme analysis. In Work- ing Notes for the CLEF 2007 Workshop, Budapest, Hungary.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The revised wordframe model for the filipino language",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Charibeth",
                        "suffix": ""
                    },
                    {
                        "first": "Solomon",
                        "middle": [
                            "L"
                        ],
                        "last": "Cheng",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "See",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Journal of Research in Science, Computing and Engineering",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Charibeth K. Cheng and Solomon L. See. 2006. The revised wordframe model for the filipino language. Journal of Research in Science, Computing and Engineering.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The Sound Pattern of English",
                "authors": [
                    {
                        "first": "Noam",
                        "middle": [],
                        "last": "Chomsky",
                        "suffix": ""
                    },
                    {
                        "first": "Morris",
                        "middle": [],
                        "last": "Halle",
                        "suffix": ""
                    }
                ],
                "year": 1968,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper & Row, New York.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Unsupervised discovery of morphemes",
                "authors": [
                    {
                        "first": "Mathias",
                        "middle": [],
                        "last": "Creutz",
                        "suffix": ""
                    },
                    {
                        "first": "Krista",
                        "middle": [],
                        "last": "Lagus",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. Workshop on Morphological and Phonological Learning of ACL'02",
                "volume": "",
                "issue": "",
                "pages": "21--30",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mathias Creutz and Krista Lagus. 2002. Unsuper- vised discovery of morphemes. In Proc. Work- shop on Morphological and Phonological Learning of ACL'02, pages 21-30, Philadelphia. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Induction of a simple morphology for highly inflecting languages",
                "authors": [
                    {
                        "first": "Mathias",
                        "middle": [],
                        "last": "Creutz",
                        "suffix": ""
                    },
                    {
                        "first": "Krista",
                        "middle": [],
                        "last": "Lagus",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. 7th Meeting of the ACL Special Interest Group in Computational Phonology (SIG-PHON)",
                "volume": "",
                "issue": "",
                "pages": "43--51",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mathias Creutz and Krista Lagus. 2004. Induction of a simple morphology for highly inflecting lan- guages. In Proc. 7th Meeting of the ACL Special Interest Group in Computational Phonology (SIG- PHON), pages 43-51, Barcelona.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Inducing the morphological lexicon of a natural language from unannotated text",
                "authors": [
                    {
                        "first": "Mathias",
                        "middle": [],
                        "last": "Creutz",
                        "suffix": ""
                    },
                    {
                        "first": "Krista",
                        "middle": [],
                        "last": "Lagus",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR'05)",
                "volume": "",
                "issue": "",
                "pages": "106--113",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In Proc. International and Interdisciplinary Conference on Adaptive Knowl- edge Representation and Reasoning (AKRR'05), pages 106-113, Espoo, Finland.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Morfessor in the morpho challenge",
                "authors": [
                    {
                        "first": "Mathias",
                        "middle": [],
                        "last": "Creutz",
                        "suffix": ""
                    },
                    {
                        "first": "Krista",
                        "middle": [],
                        "last": "Lagus",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mathias Creutz and Krista Lagus. 2006. Morfessor in the morpho challenge. In Proc. PASCAL Chal- lenge Workshop on Unsupervised Segmentation of Words into Morphemes, Venice, Italy.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "High performance, language-independent morphological segmentation",
                "authors": [
                    {
                        "first": "Sajib",
                        "middle": [],
                        "last": "Dasgupta",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proc. NAACL'07",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sajib Dasgupta and Vincent Ng. 2007. High perfor- mance, language-independent morphological seg- mentation. In Proc. NAACL'07.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Unsupervised Language Acquisition",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Carl",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "De Marcken",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carl G. de Marcken. 1996. Unsupervised Language Acquisition. Ph.D. thesis, Massachussetts Insti- tute of Technology, Boston.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Inference of variable-length linguistic and acoustic units by multigrams",
                "authors": [
                    {
                        "first": "Sabine",
                        "middle": [],
                        "last": "Deligne",
                        "suffix": ""
                    },
                    {
                        "first": "Fr\u00e9d\u00e9ric",
                        "middle": [],
                        "last": "Bimbot",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Speech Communication",
                "volume": "23",
                "issue": "",
                "pages": "223--241",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sabine Deligne and Fr\u00e9d\u00e9ric Bimbot. 1997. Inference of variable-length linguistic and acoustic units by multigrams. Speech Communication, 23:223-241.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A language-independent unsupervised model for morphological segmentation",
                "authors": [
                    {
                        "first": "Vera",
                        "middle": [],
                        "last": "Demberg",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proc. ACL'07",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vera Demberg. 2007. A language-independent un- supervised model for morphological segmentation. In Proc. ACL'07.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Unsupervised learning of the morphology of a natural language",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Goldsmith",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Computational Linguistics",
                "volume": "27",
                "issue": "",
                "pages": "153--198",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27.2:153-198.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "A short history of two-level morphology",
                "authors": [
                    {
                        "first": "Lauri",
                        "middle": [],
                        "last": "Karttunen",
                        "suffix": ""
                    },
                    {
                        "first": "Kenneth",
                        "middle": [
                            "R"
                        ],
                        "last": "Beesley",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. ESSLLI",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lauri Karttunen and Kenneth R. Beesley. 2001. A short history of two-level morphology. In Proc. ESSLLI 2001.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Unsupervised segmentation of words into morphemes -Morpho Challenge 2005, an introduction and evaluation report",
                "authors": [
                    {
                        "first": "Mikko",
                        "middle": [],
                        "last": "Kurimo",
                        "suffix": ""
                    },
                    {
                        "first": "Mathias",
                        "middle": [],
                        "last": "Creutz",
                        "suffix": ""
                    },
                    {
                        "first": "Matti",
                        "middle": [],
                        "last": "Varjokallio",
                        "suffix": ""
                    },
                    {
                        "first": "Ebru",
                        "middle": [],
                        "last": "Arisoy",
                        "suffix": ""
                    },
                    {
                        "first": "Murat",
                        "middle": [],
                        "last": "Sara\u00e7lar",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mikko Kurimo, Mathias Creutz, Matti Varjokallio, Ebru Arisoy, and Murat Sara\u00e7lar. 2006. Unsu- pervised segmentation of words into morphemes - Morpho Challenge 2005, an introduction and eval- uation report. In Proc. PASCAL Challenge Work- shop on Unsupervised Segmentation of Words into Morphemes, Venice, Italy.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Unsupervised morpheme analysis evaluation by a comparison to a linguistic gold standard -Morpho Challenge",
                "authors": [
                    {
                        "first": "Mikko",
                        "middle": [],
                        "last": "Kurimo",
                        "suffix": ""
                    },
                    {
                        "first": "Mathias",
                        "middle": [],
                        "last": "Creutz",
                        "suffix": ""
                    },
                    {
                        "first": "Matti",
                        "middle": [],
                        "last": "Varjokallio",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Working Notes for the CLEF 2007 Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mikko Kurimo, Mathias Creutz, and Matti Var- jokallio. 2007. Unsupervised morpheme analysis evaluation by a comparison to a linguistic gold standard -Morpho Challenge 2007. In Working Notes for the CLEF 2007 Workshop, Budapest, Hungary.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "A hierarchical em approach to word segmentation",
                "authors": [
                    {
                        "first": "Fuchun",
                        "middle": [],
                        "last": "Peng",
                        "suffix": ""
                    },
                    {
                        "first": "Dale",
                        "middle": [],
                        "last": "Schuurmans",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. 4th Intl. Conference on Intel. Data Analysis (IDA)",
                "volume": "",
                "issue": "",
                "pages": "238--247",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fuchun Peng and Dale Schuurmans. 2001. A hier- archical em approach to word segmentation. In Proc. 4th Intl. Conference on Intel. Data Analysis (IDA), pages 238-247.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Knowledge-free induction of morphology using latent semantic analysis",
                "authors": [],
                "year": null,
                "venue": "Proc. CoNLL'00 and LLL'00",
                "volume": "",
                "issue": "",
                "pages": "67--72",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Knowledge-free induction of morphology using la- tent semantic analysis. In Proc. CoNLL'00 and LLL'00, pages 67-72, Lisbon.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Knowledge-free induction of inflectional morphologies",
                "authors": [],
                "year": null,
                "venue": "Proc. NAACL'01",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Knowledge-free induction of inflectional morpholo- gies. In Proc. NAACL'01, Pittsburgh.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "A bayesian model for morpheme and paradigm identification",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Matthew",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "R"
                        ],
                        "last": "Snover",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Brent",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. ACL'01",
                "volume": "",
                "issue": "",
                "pages": "482--490",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew G. Snover and Michael R. Brent. 2001. A bayesian model for morpheme and paradigm identification. In Proc. ACL'01, pages 482-490, Toulouse, France.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Wicentowski",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard Wicentowski. 2002. Modeling and Learn- ing Multilingual Inflectional Morphology in a Min- imally Supervised Framework. Ph.D. thesis, Johns Hopkins University, Baltimore, Maryland.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Minimally supervised morphological analysis by multimodal alignment",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Yarowsky",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Wicentowski",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proc. ACL'00",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proc. ACL'00.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Inducing multilingual text analysis tools via robust projection accross aligned corpora",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Yarowsky",
                        "suffix": ""
                    },
                    {
                        "first": "Grace",
                        "middle": [],
                        "last": "Ngai",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Wicentowski",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. HLT'01",
                "volume": "01",
                "issue": "",
                "pages": "161--168",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text analysis tools via robust projection accross aligned corpora. In Proc. HLT'01, volume HLT 01, pages 161-168, San Diego.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF2": {
                "num": null,
                "text": "Flowchart showing the entire procedure.",
                "type_str": "figure",
                "uris": null
            },
            "FIGREF4": {
                "num": null,
                "text": "Development results for the preprocessed initial segmentation (Baseline), and segmentations produced by our approach, first after the EM Stage (EM) and again after the Split Stage (SPL) with different values of parameter b. Rules that generate underlying analyses have either been included (With Rules), or left out (No Rules).",
                "type_str": "figure",
                "uris": null
            },
            "TABREF0": {
                "content": "<table><tr><td>Tags Surface Segmentation</td><td>seat + s STM SUF</td><td>citi + es STM SUF</td><td>STM glass + es SUF</td><td>Features: VWL = vowel</td></tr><tr><td>Applicable Rule(s)</td><td/><td>\u00f8\u2192e / [+VWL] + _s y\u2192i / _ + [+ANY]</td><td>\u00f8\u2192e / [+SIB] + _s</td><td>ANY = any char. SIB = sibilant</td></tr><tr><td>Underlying Analyses</td><td>seat + s</td><td>city + s</td><td>glass + s</td><td>{s,sh,ch,...}</td></tr></table>",
                "num": null,
                "text": "When an underlying morpheme u i is associated to a surface morph s, we refer to s as an allomorph of",
                "type_str": "table",
                "html": null
            },
            "TABREF1": {
                "content": "<table><tr><td>Development Data The development gold-</td></tr><tr><td>standard for the surface metric was provided in</td></tr><tr><td>advance of Morpho Challenge '05 and consists of</td></tr><tr><td>surface segmentations for 532 English and 774</td></tr><tr><td>Turkish words.</td></tr><tr><td>The development gold-standard for the underlying</td></tr><tr><td>metric was provided in advance of Morpho Challenge</td></tr><tr><td>'07 and consists of morphological analyses for 410</td></tr><tr><td>English and 593 Turkish words.</td></tr></table>",
                "num": null,
                "text": "Training corpus sizes vary slightly, with 3 million English sentences and 1 million Turkish sentences.",
                "type_str": "table",
                "html": null
            },
            "TABREF2": {
                "content": "<table><tr><td>English</td><td>47.17</td><td>60.81</td><td>47.04</td><td>57.35</td><td>59.78</td></tr><tr><td>Turkish</td><td>37.10</td><td>29.23</td><td>32.76</td><td>31.10</td><td>54.54</td></tr></table>",
                "num": null,
                "text": "Hybrid:After Split MC Morf. MC Top Baseline No Rules With Rules",
                "type_str": "table",
                "html": null
            },
            "TABREF3": {
                "content": "<table/>",
                "num": null,
                "text": "Final test F-scores on the underlying morpheme measure used in Morpho Challenge '07. MC Morf.",
                "type_str": "table",
                "html": null
            },
            "TABREF4": {
                "content": "<table/>",
                "num": null,
                "text": "A.1 Rules Used For English e epenthesis before s suffix \u00f8 \u2192e / ..[+V] + _s \u00f8\u2192e / ..[+SIB] + _s long e deletion e \u2192\u00f8 / ..[+V][+C]_ + [+V] change y to i before suffix y \u2192i / ..[+C] +? _ + [+ANY] consonant gemination \u00f8 \u2192\u03b1[+STOP] / ..\u03b1[+STOP]_ + [+V] \u00f8 \u2192\u03b1[+STOP] / ..\u03b1[+STOP]_ + [+GLI]",
                "type_str": "table",
                "html": null
            },
            "TABREF5": {
                "content": "<table><tr><td>Base</td><td>EM</td><td colspan=\"2\">SPL:b=300 SPL:b=500</td></tr><tr><td>happen s</td><td>happen s</td><td>happ e n s</td><td>happen s</td></tr><tr><td>happier</td><td>happier</td><td>happi er</td><td>happi er</td></tr><tr><td>happiest</td><td>happiest</td><td>happ i est</td><td>happiest</td></tr><tr><td>happily</td><td>happily</td><td>happi ly</td><td>happi ly</td></tr><tr><td colspan=\"3\">happiness happiness happi ness</td><td>happiness</td></tr></table>",
                "num": null,
                "text": "English RulesA.2 Example Segmentations",
                "type_str": "table",
                "html": null
            },
            "TABREF6": {
                "content": "<table/>",
                "num": null,
                "text": "Surface segmentations after preprocessing (Base), EM Stage (EM), and Split Stage (SPL)",
                "type_str": "table",
                "html": null
            }
        }
    }
}