{
    "paper_id": "O12-2003",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:03:09.807413Z"
    },
    "title": "The Polysemy Problem, an Important Issue in a Chinese to Taiwanese TTS System",
    "authors": [
        {
            "first": "Ming-Shing",
            "middle": [],
            "last": "Yu",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Chung-Hsing University",
                "location": {
                    "postCode": "40227",
                    "settlement": "Taichung",
                    "country": "Taiwan"
                }
            },
            "email": ""
        },
        {
            "first": "Yih-Jeng",
            "middle": [],
            "last": "Lin",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Chien-Kuo Technology University",
                "location": {
                    "addrLine": "Chang-hua 500",
                    "country": "Taiwan"
                }
            },
            "email": "yclin@ctu.edu.tw"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper brings up an important issue, polysemy problems, in a Chinese to Taiwanese TTS (text-to-speech) system. Polysemy means there are words with more than one meaning or pronunciation, such as \"\u6211\u5011\" (we), \"\uf967\" (no), \"\u4f60\" (you), \"\u6211\" (I), and \"\u8981\" (want). We first will show the importance of the polysemy problem in a Chinese to Taiwanese (C2T) TTS system. Then, we will propose some approaches to a difficult case of such problems by determining the pronunciation of \"\u6211\u5011\" (we) in a C2T TTS system. There are two pronunciations of the word \"\u6211\u5011\" (we) in Taiwanese, /ghun/ and /lan/. The corresponding Chinese words are \"\uf9c6\" (we 1) and \"\u54b1\" (we 2). We propose two approaches and a combination of the two to solve the problem. The results show that we have a 93.1% precision in finding the correct pronunciation of the word \"\u6211\u5011\" (we). Compared to the results of the layered approach, which has been shown to work well in solving other polysemy problems, the results of the combined approach are an improvement.",
    "pdf_parse": {
        "paper_id": "O12-2003",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper brings up an important issue, polysemy problems, in a Chinese to Taiwanese TTS (text-to-speech) system. Polysemy means there are words with more than one meaning or pronunciation, such as \"\u6211\u5011\" (we), \"\uf967\" (no), \"\u4f60\" (you), \"\u6211\" (I), and \"\u8981\" (want). We first will show the importance of the polysemy problem in a Chinese to Taiwanese (C2T) TTS system. Then, we will propose some approaches to a difficult case of such problems by determining the pronunciation of \"\u6211\u5011\" (we) in a C2T TTS system. There are two pronunciations of the word \"\u6211\u5011\" (we) in Taiwanese, /ghun/ and /lan/. The corresponding Chinese words are \"\uf9c6\" (we 1) and \"\u54b1\" (we 2). We propose two approaches and a combination of the two to solve the problem. The results show that we have a 93.1% precision in finding the correct pronunciation of the word \"\u6211\u5011\" (we). Compared to the results of the layered approach, which has been shown to work well in solving other polysemy problems, the results of the combined approach are an improvement.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Besides Mandarin, Taiwanese is the most widely spoken dialect in Taiwan. According to Liang et al. (2004) , about 75% of the population in Taiwan speaks Taiwanese. Currently, it is government policy to encourage people to learn one's mother tongue in schools because local languages are a part of local culture.",
                "cite_spans": [
                    {
                        "start": 86,
                        "end": 105,
                        "text": "Liang et al. (2004)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Researchers (Bao et al., 2002; Chen et al., 1996; Lin et al., 1998; Lu, 2002; Shih et al., 1996; Wu et al., 2007; Yu et al., 2005) have had outstanding results in developing Mandarin Figure 1 shows a common structure of a C2T TTS system. In general, a C2T TTS system should contain four basic modules. They are (1) a text analysis module, (2) a tone sandhi module, (3) a prosody generation module, and (4) a speech synthesis module. A C2T TTS system also needs a text analysis module like that of a Mandarin TTS system. This module requires a well-defined bilingual lexicon. We also find that text analysis in a C2T TTS system should have functions not found in a Mandarin TTS system, such as phonetic transcription, digit sequence processing (Liang et al., 2004) , and a method for solving the polysemy problem. Solving the polysemy problem is the most complex and difficult of these. There has been little research on solving the polysemy problem. Polysemy means that a word has two or more meanings, which may lead to different pronunciations. For example, the word \"\u4ed6\" (he) has two pronunciations in Taiwanese, /yi/ and /yin/. The first pronunciation /yi/ of \"\u4ed6\" (he) means \"he,\" while the second pronunciation /yin/ of \"\u4ed6\" (he) means \"second-person possessive\". The correct pronunciation of a word affects the comprehensibility and fluency of Taiwanese speech.",
                "cite_spans": [
                    {
                        "start": 12,
                        "end": 30,
                        "text": "(Bao et al., 2002;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 31,
                        "end": 49,
                        "text": "Chen et al., 1996;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 50,
                        "end": 67,
                        "text": "Lin et al., 1998;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 68,
                        "end": 77,
                        "text": "Lu, 2002;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 78,
                        "end": 96,
                        "text": "Shih et al., 1996;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 97,
                        "end": 113,
                        "text": "Wu et al., 2007;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 114,
                        "end": 130,
                        "text": "Yu et al., 2005)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 743,
                        "end": 763,
                        "text": "(Liang et al., 2004)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 183,
                        "end": 191,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Many researchers have studied C2T TTS systems (Ho, 2000; Huang, 2001; Hwang, 1996; Lin et al., 1999; Pan, Yu, & Tsai, 2008; Yang, 1999; Zhong, 1999) . Nevertheless, none of the researchers considered the polysemy problem in a C2T TTS system. We think that solving the polysemy problem in a C2T TTS system is a fundamental task. The correct meaning of the synthesized words cannot be determined if this problem is not solved properly. The remainder of this paper is organized as follows. In Section 2, we will describe the polysemy problem in Taiwanese. We will give examples to show the importance of solving the polysemy problem in a C2T TTS system. Determining the correct pronunciation of the word \"\u6211\u5011\" (we) is the focus of the challenge in these cases. Section 3 is the description of the layered approach, which has been shown to work well in solving the polysemy problem (Lin et al., 2008) . Lin (2006) has also shown that the layered approach works very well in solving the polyphone problem in Chinese. We will apply the layered approach in determining the pronunciation of \"\u6211\u5011\" (we) in this section. In Section 4 and Section 5, we use two models to determine the pronunciation of the word \"\u6211\u5011\" (we) in sentences. The first approach in Section 4 is called the word-based unigram model (WU). The second approach, which will be applied in Section 5, is the word-based long-distance bigram model (WLDB). We also make some new inferences in these two sections. Section 6 shows a combination of the two models discussed in Section 4 and Second 5 for a third approach to solving the polysemy problem. Finally, in Section 7, we summarize our major findings and outline some future works.",
                "cite_spans": [
                    {
                        "start": 46,
                        "end": 56,
                        "text": "(Ho, 2000;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 57,
                        "end": 69,
                        "text": "Huang, 2001;",
                        "ref_id": null
                    },
                    {
                        "start": 70,
                        "end": 82,
                        "text": "Hwang, 1996;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 83,
                        "end": 100,
                        "text": "Lin et al., 1999;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 101,
                        "end": 123,
                        "text": "Pan, Yu, & Tsai, 2008;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 124,
                        "end": 135,
                        "text": "Yang, 1999;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 136,
                        "end": 148,
                        "text": "Zhong, 1999)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 877,
                        "end": 895,
                        "text": "(Lin et al., 2008)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 898,
                        "end": 908,
                        "text": "Lin (2006)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Unlike in Chinese, the polysemy problem in Taiwanese appears frequently and is complex. We will give some examples to show the importance of solving the polysemy problem in a C2T TTS system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polysemy Problems in Taiwanese",
                "sec_num": "2."
            },
            {
                "text": "The first examples feature the pronouns \"\u4f60\" (you), \"\u6211\" (I), and \"\u4ed6\" (he) in Taiwanese. These three pronouns have two pronunciations, each of which corresponds to a different meaning. Example 2.1 shows the pronunciations of the word \"\u6211\" (I) and \"\u4f60\" (you) in Taiwanese. The two pronunciations of \"\u6211\" (I) are /ghua/ with the meaning of \"I\" or \"me\" and /ghun/ with the meaning of \"my\". The two pronunciations of \"\u4f60\" (you) are /li/ with the meaning of \"you\" and /lin/ with the meaning of \"your\". If one chooses the wrong pronunciation, the utterance will carry the wrong meaning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polysemy Problems in Taiwanese",
                "sec_num": "2."
            },
            {
                "text": "\u6211/ghua/\u904e\u4e00\u6703\u5152\u6703\u62ff\u5e7e\u672c\u6709\u95dc\u53f0\u8a9e\u6587\u5316\u7684\u66f8\u5230\u4f60/lin/\u5bb6\u7d66\u4f60/li/\uff0c\u4f60/li/\u53ef\u4ee5 \uf967\u5fc5\u5230\u6211/ghun/\u5bb6\uf92d\u627e\u6211/ghua/\u62ff\u3002 (I will bring some books about Taiwanese culture to your house for you later; you need not come to my home to get them from me.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.1",
                "sec_num": null
            },
            {
                "text": "Example 2.2 shows the two different pronunciations of \"\u4ed6\" (he). They are /yi/, with the meaning of \"he\" or \"him,\" and /yin/, with the meaning of \"his\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.1",
                "sec_num": null
            },
            {
                "text": "\u6211\u770b\u5230\u4ed6/yi/\u62ff\u4e00\u76c6\uf91f\u82b1\u56de\u4ed6/yin/\u5bb6\u7d66\u4ed6/yin/\u7238\u7238\u3002 (I saw him bring an orchid back to his home for his father.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.2",
                "sec_num": null
            },
            {
                "text": "The following examples focus on \"\uf967\" (no), which has six different pronunciations. They are /bho/, /m/, /bhei/, /bhuaih/, /mai/, and /but/. Examples 2.3 through 2.6 show four of the six pronunciations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.2",
                "sec_num": null
            },
            {
                "text": "\u4e00\u822c\u4eba\u4e26\uf967/bho/\u5bb9\uf9e0\u770b\u51fa\u5b83\u7684\u91cd\u8981\u6027\u3002 (It is not easy for a person to see its importance.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "Example 2.4 \uf967/m/\u77e5\uf92a\u8cbb\uf9ba\u591a\u5c11\u570b\u5bb6\u8cc7\u6e90\u3002 (We do not know how many national resources were wasted.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "Example 2.5 \u8b93\u4eba\uf997\u60f3\uf967/bhei/\u5230\u4ed6\u8207\u6a5f\u68b0\u7684\u95dc\u4fc2\u3002 (One would not come to the proper conclusion regarding the relationship between that person and machines.) Example 2.6 \u83ef\u822a\u4f7f\u7528\u4e4b\u822a\u7a7a\u7ad9\u4ea4\u901a\u5df2\uf967/but/\u5982\u5f9e\u524d\u65b9\uf965\u3002 (The traffic at the airport is not as convenient as it was in the past for China Airlines.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "Examples 2.7 through 2.9 are examples of pronunciations of the word \"\u4e0a\" (up). The word \"\u4e0a\" (up) has three pronunciations. They are /ding/, /siong/, and /jiunn/. The meaning of the word \"\u4e0a\" (up) in Example 2.7 has the sense of \"previous\". Example 2.8 shows a case where \"\u4e0a\" (up) means \"on\". Example 2.9 is an example of the use of \"\u4e0a\" (up) to mean, \"get on\". Another word we want to discuss is \"\u4e0b\" (down). The word \"\u4e0b\" (down) has four pronunciations. They are /ha/, /ao/, /loh/, and /ei/. Examples 2.10-2.13 are some examples of pronunciations of the word \"\u4e0b\" (down). The meaning of \"\u4e0b\" (down) in Example 2.10 is \"close\" or \"end\". Example 2.11 shows how the same word can mean \"next\". Example 2.12 illustrates the meaning \"falling\". Example 2.13 shows another example of it used to mean \"next\". We have proposed a layered approach in predicting the pronunciations \"\u4e0a\" (up), \"\u4e0b\" (down), and \"\uf967\" (no) (Lin et al., 2008) . The layered approach works very well in solving the polysemy problems in a C2T TTS system. A more difficult case of the polysemy problem will be encountered in this paper.",
                "cite_spans": [
                    {
                        "start": 898,
                        "end": 916,
                        "text": "(Lin et al., 2008)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "In addition to the above words, another difficult case is \"\u6211\u5011\" (we). Taiwanese speakers arrive at the correct pronunciation of the word \"\u6211\u5011\" (we) by deciding whether to include the listener in the pronoun. Unlike Chinese, \"\u6211\u5011\" (we) has two pronunciations with different meanings when used in Taiwanese. This word can include (1) both the speaker and listener(s) or (2) just the speaker. These variations lead to two different pronunciations in Taiwanese, /lan/ and /ghun/. The Chinese characters for /lan/ and /ghun/ are \"\u54b1\" (we) and \"\uf9c6\" (we), respectively. The following example helps to illustrate the different meanings. More examples to illustrate these differences will be used later in this section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "Assume first that Jeffrey and his younger brother, Jimmy, ask their father to take them to see a movie then go shopping. Jeffrey can say the following to his father: Example 2.14 \u7238\u7238\u4f60\u8981\u8a18\u5f97\u5e36\u6211\u5011\u4e00\u8d77\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u518d\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, remember to take us to see a movie and go shopping with us after we see the movie.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "The pronunciation of the first word \"\u6211\u5011\" (we) in Example 2.14 is /ghun/ in Taiwanese since the word \"\u6211\u5011\" (we) does not include the listener, Jeffrey's father. The second instance of \"\u6211\u5011\" (we), however, is pronounced /lan/ since this instance includes both the speaker and the listener.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "The pronunciation of \"\u6211\u5011\" (we) in Example 2.15 is /ghun/ in Taiwanese since the word \"\u6211\u5011\" (we) includes Jeffrey and Jimmy but does not include the listener, Jeffrey's father.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.3",
                "sec_num": null
            },
            {
                "text": "will go to see a movie with my younger brother, and the two of us will go shopping after seeing the movie.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "If a C2T TTS system cannot identify the correct pronunciation of the word \"\u6211\u5011\" (we), we cannot understand what the synthesized Taiwanese speech means. In a C2T TTS system, it is necessary to decide the correct pronunciation of the Chinese word \"\u6211\u5011\" (we) in order to have a clear understanding of synthesized Taiwanese speech.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "Distinguishing different kinds of meanings of \"\u6211\u5011\" (we) is a semantic problem. It is a difficult but important issue to be overcome in the text analysis module of a C2T TTS system. As there is only one pronunciation of \"\u6211\u5011\" (we) in Mandarin, a Mandarin TTS system does not need to identify the meaning of the word \"\u6211\u5011\" (we).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "To compare this work with the research in Hwang et al. (2000) and Yu et al. (2003) , determining the meaning of the word \"\u6211\u5011\" (we) may be more difficult than solving the non-text symbol problem. A person can determine the relationship between the listeners and the speaker then determine the meaning of the word \"\u6211\u5011\" (we). It is more difficult, however, for a computer to recognize the relationship between the listeners and speakers in a sentence.",
                "cite_spans": [
                    {
                        "start": 42,
                        "end": 61,
                        "text": "Hwang et al. (2000)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 66,
                        "end": 82,
                        "text": "Yu et al. (2003)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "Since determining whether listeners are included is a context-sensitive problem, we need to look at the surrounding words, sentences, or paragraphs to find the answer.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "Let us examine the following Chinese sentence (Example 2.16) to help clarify the problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "Example 2.16 \u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a\uf9fa\u6cc1\u3002 (We should press forward to improve the traffic of Taipei City.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "It is difficult to determine the Taiwanese pronunciation of the word \"\u6211\u5011\" (we) in Example 2.16 from the information in this sentence. To get the correct pronunciation of the word \"\u6211\u5011\" (we), we need to expand the sentence by adding words to the subject, i.e., look forward, and predicate, i.e., look backward. Assume that, when we add words to the subject and the predicate, we have a sentence that looks like Example 2.17: As the reporters from the USA have no obligation to improve the traffic of Taipei, we can conclude that \"\u6211\u5011\" (we) does not include them. Therefore, it is safe to say that the correct pronunciation of the word \"\u6211\u5011\" (we) in Example 2.17 should be /ghun/.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "Example",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "On the other hand, if the sentence reads as in Example 2.18 and context is included, the pronunciation of the word \"\u6211\u5011\" (we) should be /lan/. We can find some important keywords such as \"\u53f0\uf963\u5e02\u9577\" (the Taipei city mayor) and \"\u5e02\u5e9c\u6703\u8b70\" (a meeting of the city government).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I",
                "sec_num": null
            },
            {
                "text": "\uf9fa\u6cc1\u3002\u300d (In a meeting of the city government, the Taipei city mayor, Ma",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.18 \u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u5728\u5e02\u5e9c\u6703\u8b70\u4e2d\u6307\u51fa: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a",
                "sec_num": null
            },
            {
                "text": "Ying-Jeou, said that we should press forward to improve the traffic of Taipei City.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.18 \u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u5728\u5e02\u5e9c\u6703\u8b70\u4e2d\u6307\u51fa: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a",
                "sec_num": null
            },
            {
                "text": "When disambiguating the meaning of some non-text symbols, such as \"/\", \":\", and \"-\" the keywords to decide the pronunciation of the special symbols may be within a fixed distance from the given symbol. Nevertheless, the keywords can be at any distance from the word \"\u6211\u5011\" (we), as per Example 2.19. Some words that could be used to determine the pronunciation of \"\u6211\u5011\" (we), such as \"\u5e02\u5e9c\u6703\u8b70\" (a meeting of the city government), \"\u53f0\uf963 \u5e02\u9577\" (the Taipei city mayor), and \"\u99ac\u82f1\u4e5d\" (Ma Ying-Jeou), are at various distances from \"\u6211\u5011\" (we).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.18 \u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u5728\u5e02\u5e9c\u6703\u8b70\u4e2d\u6307\u51fa: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a",
                "sec_num": null
            },
            {
                "text": "\u5e02\u9577\uf96f: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a\uf9fa\u6cc1\u3002\u300d (In a meeting of the city government, the Taipei city mayor, Ma Ying-Jeou, talked about the problem of the traffic in Taipei city. Mayor Ma said that we should press forward to improve the traffic of Taipei city.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.19 \u5728\u4eca\u5929\u7684\u5e02\u5e9c\u6703\u8b70\u4e2d\uff0c\u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u63d0\u5230\u95dc\u65bc\u53f0\uf963\u5e02\u7684\u4ea4\u901a\u554f\u984c\u6642\uff0c\u99ac",
                "sec_num": null
            },
            {
                "text": "These examples illustrate the importance of determining the proper pronunciation for each word in a C2T TTS system. Compared to other cases of polysemy, determining the proper pronunciation of the word \"\u6211\u5011\" (we) in Taiwanese is a difficult task. We will focus on solving the polysemy problem of the word \"\u6211\u5011\" (we) in this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example 2.19 \u5728\u4eca\u5929\u7684\u5e02\u5e9c\u6703\u8b70\u4e2d\uff0c\u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u63d0\u5230\u95dc\u65bc\u53f0\uf963\u5e02\u7684\u4ea4\u901a\u554f\u984c\u6642\uff0c\u99ac",
                "sec_num": null
            },
            {
                "text": "(we) Lin (2006) showed that the layered approach worked very well in solving the polyphone problem in Chinese. Lin (2006) also showed that using the layered approach to solve the polyphone problem is more accurate than using the CART decision tree. We also show that using the layered approach in solving the polysemy problems of other words has worked well in our research (Lin et al., 2008) . We will apply the layered approach in solving the polysemy problem of \"\u6211\u5011\" (we) in Taiwanese.",
                "cite_spans": [
                    {
                        "start": 5,
                        "end": 15,
                        "text": "Lin (2006)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 111,
                        "end": 121,
                        "text": "Lin (2006)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 374,
                        "end": 392,
                        "text": "(Lin et al., 2008)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using the Layered Approach to Determine the Pronunciation of \"\u6211\u5011\"",
                "sec_num": "3."
            },
            {
                "text": "First, we will describe the experimental data used in this paper. The experimental data is comprised of over forty thousand news items from eight news categories, in which 1,546 articles contain the word \"\u6211\u5011\" (we). The data was downloaded from the Internet from August 23, 2003 to October 21, 2004. The distribution of these articles is shown in Table 1 . We determined the pronunciation of each \"\u6211\u5011\" (we) manually. As shown in Table 2 , in the 1,546 news articles, \"\u6211\u5011\" occurred 3,195 times. In our experiment, 2,556 samples were randomly chosen for the training data while the other 639 samples were added to the test data. In the training data, there were 1,916 instances with the pronunciation of /ghun/ for the Chinese character \" \uf9c6 \" and 640 instances with the pronunciation of /lan/ for the Chinese character \"\u54b1\". Figure 2 shows the layered approach to the polysemy problem with an input test sentence. We use Example 3.1 to illustrate how the layered approach works.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 346,
                        "end": 353,
                        "text": "Table 1",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 428,
                        "end": 435,
                        "text": "Table 2",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 821,
                        "end": 829,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Description of Experimental Data",
                "sec_num": "3.1"
            },
            {
                "text": "Example 3.1 \u7238\u7238 \u544a\u8a34 \u6211\u5011 \u904e \u99ac\uf937 \u8981 \u5c0f\u5fc3\u3002 (Dad told us to be careful when crossing the street.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of Layered Approach",
                "sec_num": "3.2"
            },
            {
                "text": "Example 3.1 is an utterance in Chinese with segmentation information. Spaces were used to separate the words in Example 3.1. We want to predict the correct pronunciation for the word \"\u6211\u5011\" (we) in Example 3.1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of Layered Approach",
                "sec_num": "3.2"
            },
            {
                "text": "As depicted in Figure 2 , there are four layers in our approach. We set ( 2   1 0 1 2 , , , , w w w w w \u2212 \u2212 + + ) as (\u7238\u7238,\u544a\u8a34,\u6211\u5011,\u904e,\u99ac\uf937). This pattern (\u7238\u7238,\u544a\u8a34,\u6211\u5011,\u904e,\u99ac\uf937) will be the input for Layer 4. Nevertheless, as this pattern is not found in the training data, we cannot decide the pronunciation of \"\u6211\u5011\" (we) with this pattern. We then use two patterns",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 15,
                        "end": 23,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Description of Layered Approach",
                "sec_num": "3.2"
            },
            {
                "text": "( 2 1 0 1 , , , w w w w \u2212 \u2212 + ) and ( 1 0 1 2 , , , w w w w \u2212 + + )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of Layered Approach",
                "sec_num": "3.2"
            },
            {
                "text": "to derive (\u7238\u7238,\u544a\u8a34,\u6211\u5011,\u904e) and (\u544a\u8a34,\u6211\u5011,\u904e, \u99ac\uf937), respectively, as the inputs for Layer 3. Since we cannot find any patterns in the training data that match either of these patterns, the pronunciation cannot be decided in this layer.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of Layered Approach",
                "sec_num": "3.2"
            },
            {
                "text": "Three patterns are used in Layer 2. They are (\u7238\u7238,\u544a\u8a34,\u6211\u5011), (\u544a\u8a34,\u6211\u5011,\u904e), and (\u6211 \u5011,\u904e,\u99ac\uf937). We find that the pattern (\u7238\u7238,\u544a\u8a34,\u6211\u5011) has appeared in training data. The frequencies are 2 for pronunciation /ghun/ and 1 for /lan/. Thus, the probabilities for the possible pronunciations of \"\u6211\u5011\" (we) in Example 3.1 are 2/3 for /ghun/ and 1/3 for /lan/. We can conclude that the predicted pronunciation is /ghun/. The layered approach terminates in Layer 2 in this example. If the process did not terminate prematurely, as in this example, it would have terminated in Layer 1, as shown by the dashed lines in Figure 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 592,
                        "end": 600,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Description of Layered Approach",
                "sec_num": "3.2"
            },
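
To make the layered back-off concrete, the sketch below (Python) mirrors the walkthrough above: the widest pattern (w_{-2}..w_{+2}) is tried first, and the search backs off to shorter sub-patterns of the 5-word window until one is found in the training counts. This is a minimal illustration under stated assumptions, not the authors' implementation; the data layout, the merging of counts when several patterns match in the same layer, and the fallback to the majority pronunciation /ghun/ are assumptions.

from collections import Counter

def layered_predict(words, idx, pattern_counts, default="/ghun/"):
    """Back-off over the 5-word window around the target word at position idx.
    pattern_counts maps a tuple of words to a Counter({"/ghun/": n, "/lan/": m})."""
    lo, hi = max(0, idx - 2), min(len(words), idx + 3)
    window = words[lo:hi]
    # Layer 4 = the whole 5-word window, Layer 3 = its 4-word sub-patterns,
    # Layer 2 = its 3-word sub-patterns, Layer 1 = its 2-word sub-patterns.
    for size in (5, 4, 3, 2):
        merged = Counter()
        for start in range(len(window) - size + 1):
            merged += pattern_counts.get(tuple(window[start:start + size]), Counter())
        if merged:
            # Pick the more frequent pronunciation among the matched patterns.
            return merged.most_common(1)[0][0]
    return default  # no pattern found in any layer

# Example 3.1 (romanized stand-ins for the segmented words): the Layer 2 pattern
# ("baba", "gaosu", "women") has /ghun/=2, /lan/=1 in training, so /ghun/ is predicted.
counts = {("baba", "gaosu", "women"): Counter({"/ghun/": 2, "/lan/": 1})}
sentence = ["baba", "gaosu", "women", "guo", "malu", "yao", "xiaoxin"]
print(layered_predict(sentence, 2, counts))  # -> /ghun/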
            {
                "text": "We used the experimental data mentioned in 3.1. There are 3,159 samples in the corpus. We used 2,556 samples to train the four layers. The other 639 samples form the test data. Table 3 shows the accuracy of using the layered approach based on word patterns. Thus, the features in the layered approach are words. The results show that the layered approach does not work well. The overall accuracy is 77.00%. No pattern found, go to the next layer. ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 177,
                        "end": 184,
                        "text": "Table 3",
                        "ref_id": "TABREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Results of Using the Layered Approach",
                "sec_num": "3.3"
            },
            {
                "text": "/ghun/=0 /lan/=0 (\u7238\u7238,\u544a\u8a34) (\u544a\u8a34,\u6211\u5011) (\u6211\u5011,\u904e) (\u904e,\u99ac\uf937) \uff0b \uff0b \uff0b /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=2 /lan/=1 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results of Using the Layered Approach",
                "sec_num": "3.3"
            },
            {
                "text": "In this section, we propose a word-based unigram language model (WU). Two statistical results are needed in this model. Statistical results were compiled for (1) the frequency of appearance for words that appear to the left of \"\u6211\u5011\" (we) in the training data and (2) the frequencies for words that appear to the right. Each punctuation mark was treated as a word. Each testing sample looks like the following:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "where w -i is the i th word to the left of \"\u6211\u5011\" (we) and w i is the i th word to the right. The following formulae were used to find four different scores for each testing sample:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "S uL (/lan/), S uR (/lan/), S uL (/ghun/), and S uR (/ghun/). 1 (/ / & ) (/ /) / / (/ / & ) (/ / & ) (/ /) (/ /) j M uL uL j j j uL uL C lan w T lan S ( lan ) C lan w C ghun w T lan T ghun \u2212 \u2212 \u2212 = = + \u2211 (1) 1 (/ / & ) (/ /) (/ /) (/ / & ) (/ / & ) (/ /) (/ /) j N uR uR j j j uR uR C lan w T lan S lan C lan w C ghun w T lan T ghun + + + = = + \u2211 (2) 1 (/ / & ) (/ /) (/ /) (/ / & ) (/ / & ) (/ /) (/ /) j M uL uL j j j uL uL C ghun w T ghun S ghun C lan w C ghun w T lan T ghun \u2212 \u2212 \u2212 = = + \u2211 (3) 1 (/ / & ) (/ /) / / (/ / & ) (/ / & ) (/ /) (/ /) j N uR uR j j j uR uR C ghun w T ghun S ( ghun ) C lan w C ghun w T lan T ghun + + + = = + \u2211 (4) where 1 (/ /) (/ / & ) uL uL l l T lan C lan w \u2212 = = \u2211 (5) 1 (/ /) (/ / & ) uL uL p p T ghun C ghun w \u2212 = = \u2211 (6) 1 (/ /) (/ / & ) uR uR l l T lan C lan w + = = \u2211 (7) 1 (/ /) (/ / & ) uR uR p p T ghun C ghun w + = = \u2211",
                        "eq_num": "(8)"
                    }
                ],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "uL different kinds of words appear on the left side of \"\u6211\u5011\" (we) in the training corpus. T uL (/lan/) is the total frequency of these uL words in the training data where the pronunciation of \"\u6211\u5011\" (we) is /lan/. Similarly, T uL (/ghun/) represents the total frequency of uL words where \"\u6211\u5011\" (we) is pronounced /ghun/. uR is the number of different words that appear to the right side of \"\u6211\u5011\" (we) in the training corpus. T uR (/lan/) and T uR (/ghun/) are the total frequencies of these uR words in the training data where pronunciation of \"\u6211\u5011\" (we) is /lan/ and /ghun/, respectively. C(/ghun/&w p ) is the frequency that the word w p appears in the training corpus where the pronunciation of \"\u6211\u5011\" (we) is /ghun /.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "(/ / & ) (/ /) j uL C lan w T lan \u2212",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "in (1) means the significance of pronunciation /lan/ of word w -j in training data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "Formulae (1) through (4) were applied to each test sample to produce four scores. The scores were S uL (/lan/) for the words to the left of \"\u6211\u5011\" (we) when the pronunciation was /lan/, S uR (/lan/) for the words to the right when the pronunciation was /lan/, S uL (/ghun/) for the words to the left of \"\u6211\u5011\" (we) when the pronunciation was /ghun/, and S uR (/ghun/) for the words to the right when the pronunciation was /ghun/. The pronunciation of \"\u6211\u5011\" (we) is /lan/ if S uL (/lan/)+ S uR (/lan/) > S uL (/ghun/) + S uR (/ghun/). The result is /ghun/ otherwise. The experiments were inside and outside tests. First, we applied WU with the training data mentioned in Section 3.1 to find the best ranges in determining the pronunciation of \"\u6211 \u5011\" (we). We defined a window as (M, N), where M was number of words to the left of \"\u6211\u5011\" (we) and N was the number of words to the right. Three hundred and ninety nine (20*20-1=399) different windows were applied when using the WU model. As shown in Table  4 , the best result from an inside test was 87.00%, with a window of (17, 10).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 989,
                        "end": 997,
                        "text": "Table  4",
                        "ref_id": "TABREF8"
                    }
                ],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
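            {
                "text": "The paper gives no implementation, but the scoring just described can be sketched in a few lines of Python. The sketch below is only an illustration of formulas (1) through (8) and of the decision rule, under the assumption that left_counts and right_counts are hypothetical dictionaries of the form {'/lan/': {word: count}, '/ghun/': {word: count}} holding the counts C(pron & w) collected from the training corpus for the left and right context windows of the chosen window size (M, N).",
                "code": [
                    "def wu_side_score(words, counts, pron):",
                    "    # counts[pron][w] = C(pron & w) on this side; the totals give T(pron), formulas (5)-(8).",
                    "    totals = {p: sum(c.values()) for p, c in counts.items()}",
                    "    other = '/ghun/' if pron == '/lan/' else '/lan/'",
                    "    score = 0.0",
                    "    for w in words:",
                    "        sig = counts[pron].get(w, 0) / totals[pron] if totals[pron] else 0.0",
                    "        sig_other = counts[other].get(w, 0) / totals[other] if totals[other] else 0.0",
                    "        if sig + sig_other > 0:",
                    "            score += sig / (sig + sig_other)  # relative significance of pron for word w",
                    "    return score",
                    "",
                    "def wu_decide(left_words, right_words, left_counts, right_counts):",
                    "    # Decision rule: /lan/ iff S_uL(/lan/) + S_uR(/lan/) > S_uL(/ghun/) + S_uR(/ghun/).",
                    "    s = {p: wu_side_score(left_words, left_counts, p) +",
                    "            wu_side_score(right_words, right_counts, p)",
                    "         for p in ('/lan/', '/ghun/')}",
                    "    return '/lan/' if s['/lan/'] > s['/ghun/'] else '/ghun/'"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },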
            {
                "text": "The best result when the correct pronunciation of \"\u6211\u5011\" (we) was /ghun/ was 94.01%, achieved when the window was (12, 6). Nevertheless, the results when the pronunciation was /lan/ and the window was the same were not good. The highest accuracy achieved was 45.48%. Also, as shown in 4 th row of Table 4 , the best result when applying WU when the pronunciation was /lan/ was just 77.88%, when the window was (19, 14) . This shows that WU did not work well when the pronunciation of \"\u6211\u5011\" (we) was /lan/. We applied WU with a window of (17, 10) for testing data. The overall accuracy of the outside tests was 75.59%. The accuracies were 90.40% and 31.25% when the pronunciations were /ghun/ and /lan/, respectively. ",
                "cite_spans": [
                    {
                        "start": 408,
                        "end": 412,
                        "text": "(19,",
                        "ref_id": null
                    },
                    {
                        "start": 413,
                        "end": 416,
                        "text": "14)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 295,
                        "end": 302,
                        "text": "Table 4",
                        "ref_id": "TABREF8"
                    }
                ],
                "eq_spans": [],
                "section": "Word-based Unigram Language Model",
                "sec_num": "4."
            },
            {
                "text": "We will bring up the word-based long-distance bigram language model (WLDB) in this section. According to Section 2 of this paper, there are two different meanings for \"\u6211\u5011\" (we). The two meanings are different in that one includes the listener(s) and the other does not. We propose a modification of the WU model by having two words appear together in the text to clarify the relationship between the speaker and listener(s). Examples of this modification are \"\u53f0\uf963\u5e02\u9577\" (the Taipei city mayor) and \"\u7f8e\u570b\u8a18\u8005\" (the reporter(s) from the USA) in Example 2.17 and \"\u53f0\uf963\u5e02\u9577\" and \"\u5e02\u5e9c\u6703\u8b70\" (a city government meeting) in Examples 2.18 and 2.19.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word-based Long Distance Bigram Language Model",
                "sec_num": "5."
            },
            {
                "text": "The following formulae were used to find four scores for each testing sample, S bL (/lan/), S bR (/lan/), S bL (/ghun/), and S bR (/ghun/). We assume that bL different words appear to the left of \"\u6211\u5011\" (we) in the training corpus and bR different words appear to the right. Formulae 9, 10, 11, and 12 were applied to each test sample, and they produced four scores. C(/lan/&w i &w j ) in (9) is the frequency at which words w i and w j appear in the training corpus when the pronunciation of \"\u6211\u5011\" (we) is /lan/. S bL (/lan/) is the score for the words to the left of \"\u6211\u5011\" (we) when the pronunciation is /lan/, and S bR (/lan/) is the score for the words to the right. Similarly, S bL (/ghun/) and S bR (/ghun/) represent the scores for the words to the left and right, respectively, when \"\u6211\u5011\" (we) is pronounced /ghun/. In summary, the pronunciation of the word \"\u6211\u5011\" (we) is /lan/ if S bL (/lan/) + S bR (/lan/) > S bL (/ghun/) + S bR (/ghun/). The pronunciation is /ghun/ otherwise.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .",
                "sec_num": null
            },
            {
                "text": "1 (/ / & & ) (/ /) (/ /) (/ / & & ) (/ / & & ) (/ /) (/ /) i j M M bL bL i j i j i j i bL bL C lan w w T lan S lan C lan w w C ghun w w T C lan T C ghun \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = = = + \u2211 \u2211 (9) 1 (/ / & & ) (/ /) (/ /) (/ / & & ) (/ / & & ) (/ /) (/ /) i j N N bR bR i j i j i j i bR bR C lan w w T lan S lan C lan w w C ghun w w T lan T ghun + + + + = = = + \u2211 \u2211 (10) 1 (/ / & & ) (/ /) (/ /) (/ /& & ) (/ /& & ) (/ /) (/ /) i j M M bL bL i j i j i j i bL bL C ghun w w T ghun S ghun C ghun w w C lan w w T ghun T lan \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = = = + \u2211 \u2211 (11) 1 (/ / & & ) (/ /) (/ /) (/ / & & ) (/ / & & ) (/ /) (/ /) i j N N bR bR i j i j i j i bR bR C ghun w w T ghun S ghun C ghun w w C lan",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .",
                "sec_num": null
            },
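            {
                "text": "As with the unigram model, a short Python sketch may help make formulas (9) through (12) concrete. It assumes, as a simplification, that pair_counts_left and pair_counts_right are hypothetical dictionaries mapping each pronunciation to the counts C(pron & w i & w j ) of unordered word pairs observed on the corresponding side of the target word; whether the original system stores ordered or unordered pairs is not stated in the paper.",
                "code": [
                    "from itertools import combinations",
                    "",
                    "def wldb_side_score(words, pair_counts, pron):",
                    "    # pair_counts[pron][(wi, wj)] = C(pron & wi & wj) for word pairs on this side.",
                    "    totals = {p: sum(c.values()) for p, c in pair_counts.items()}",
                    "    other = '/ghun/' if pron == '/lan/' else '/lan/'",
                    "    score = 0.0",
                    "    for wi, wj in combinations(words, 2):  # long-distance bigrams inside the window",
                    "        pair = tuple(sorted((wi, wj)))     # assumption: pairs are stored unordered",
                    "        sig = pair_counts[pron].get(pair, 0) / totals[pron] if totals[pron] else 0.0",
                    "        sig_other = pair_counts[other].get(pair, 0) / totals[other] if totals[other] else 0.0",
                    "        if sig + sig_other > 0:",
                    "            score += sig / (sig + sig_other)",
                    "    return score",
                    "",
                    "def wldb_decide(left_words, right_words, pair_counts_left, pair_counts_right):",
                    "    s = {p: wldb_side_score(left_words, pair_counts_left, p) +",
                    "            wldb_side_score(right_words, pair_counts_right, p)",
                    "         for p in ('/lan/', '/ghun/')}",
                    "    return '/lan/' if s['/lan/'] > s['/ghun/'] else '/ghun/'"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .",
                "sec_num": null
            },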
            {
                "text": "We applied WLDB with the training data mentioned in Section 3.1 to find the best ranges in determining the pronunciation of \"\u6211\u5011\" (we). We defined a window of (M, N), where M was the number of words to the left and N was number of words to the right. Three hundred and sixty (19*19-1=360) different windows were applied in the analysis of using the WLDB model. As shown in the 2 nd row of Table 5 , the best result of the inside test was 94.25% with the best range being 11 words to the left of \"\u6211\u5011\" (we) and 7 words to the right.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 388,
                        "end": 395,
                        "text": "Table 5",
                        "ref_id": "TABREF10"
                    }
                ],
                "eq_spans": [],
                "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .",
                "sec_num": null
            },
            {
                "text": "The best result when the correct pronunciation of \"\u6211\u5011\" (we) was /lan/ was 99.87%, when the window was (11, 5). Nevertheless, the result for /ghun/ with the same window was not good. The highest accuracy achieved was 89.69%. As shown in the 3 rd row of Table 5 , the best result when applying WLDB when the pronunciation was /ghun/ was 93.48%, when the window was (4, 13). This shows that WLDB does not work well when the pronunciation of \"\u6211\u5011\" (we) is /ghun/. We applied the WLDB model to the test data using a window of (11, 7). The overall accuracy of outside tests was 85.72%. The accuracies were 83.26% and 93.10% when the pronunciations were /ghun/ and /lan/, respectively.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 252,
                        "end": 259,
                        "text": "Table 5",
                        "ref_id": "TABREF10"
                    }
                ],
                "eq_spans": [],
                "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .",
                "sec_num": null
            },
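            {
                "text": "The exhaustive window search described above is simple to reproduce. The sketch below assumes a hypothetical evaluate(m, n) callback that runs the chosen model (WU or WLDB) over the training samples with m left-context and n right-context words and returns its accuracy; with ranges of 0-19 it scans the 399 WU windows, and with 0-18 the 360 WLDB windows, which appears to be how the counts above arise.",
                "code": [
                    "def best_window(evaluate, max_left, max_right):",
                    "    # evaluate(m, n) is assumed to return the inside-test accuracy for window (m, n).",
                    "    best_acc, best_mn = 0.0, None",
                    "    for m in range(max_left + 1):",
                    "        for n in range(max_right + 1):",
                    "            if m == 0 and n == 0:",
                    "                continue  # skip the empty window",
                    "            acc = evaluate(m, n)",
                    "            if acc > best_acc:",
                    "                best_acc, best_mn = acc, (m, n)",
                    "    return best_mn, best_acc",
                    "",
                    "# e.g. best_window(evaluate_wldb, 18, 18) scans 19*19-1 = 360 windows and, if",
                    "# evaluate_wldb reproduces the inside test above, would return ((11, 7), 0.9425)."
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .",
                "sec_num": null
            },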
            {
                "text": "Based on the results from the two models, WU and WLDB, we can draw the following The Polysemy Problem, an Important Issue in a 57 Chinese to Taiwanese TTS System conclusions: the word-based long distance bigram language model is good when the pronunciation is /lan/, while the word-based unigram language model works well when the pronunciation is /ghun/. In this section, we propose combining the models to achieve better results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The combined Approach",
                "sec_num": "6."
            },
            {
                "text": "According to the inside experimental results shown in Table 4 and Table 5 , we will combine the WU model with a window of (12, 6) and the WLDB model with a window of (11, 5) as our combined approach. This combination of WU and WLDB is similar to the approach used by Yu and Huang. We will try to find the possibility of making a correct choice when using WU or WLDB, which will be termed \"confidence\". We will adopt the output of the method with higher confidence.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 54,
                        "end": 73,
                        "text": "Table 4 and Table 5",
                        "ref_id": "TABREF8"
                    }
                ],
                "eq_spans": [],
                "section": "The combined Approach",
                "sec_num": "6."
            },
            {
                "text": "The first step in this process is to find a confidence curve for each model. The goal is to estimate the confidence for each approach and assess the difference. The higher score is more likely to be the correct answer. To do so, we measure the accuracy of each division and use a regression to estimate the confidence measure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confidence Measure",
                "sec_num": "6.1"
            },
            {
                "text": "Algorithm 1, below, will be used to find the confidence curve for the word-based unigram language model. As the total number of words in each input sample is not constant, we must first normalize the scores Su i (/lan/) and Su i (/ghun/). We will find the precision rates (PR k ) in the interval [0, 1] for |NSu i (/ghun/)-NSu i (/lan/)| in Step 2 of Algorithm 1 for each i. We then find a regression curve for the PR k . The regression curve is used to estimate the probability of making a correct decision when using WU. Therefore, it follows that, the higher the probability is, the greater the confidence we can have in the results from WU.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confidence Measure",
                "sec_num": "6.1"
            },
            {
                "text": "Input: The score for each training sample, Su i (/lan/) and Su i (/ghun/), where i=1,2,3, \u2026, n and n is the number of training samples. Output: A function for the confidence curve for the given Su i (/lan/) and Su i (/ghun/), i=1,2,3, \u2026, n. Algorithm:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "Step 1: Normalize Su i (/lan/) and Su i (/ghun/) for each training sample i using the following formula: NSu i (/lan/)=Su i (/lan/)/(Total number of words in training sample i) NSu i (/ghun/)=Su i (/ghun/)/(Total number of words in training sample i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "Step 2: Let d i =| NSu i (/ghun/)-NSu i (/lan/)| and let D={d 1 , d 2 ,\u2026,d n }. Find the accuracy rate for each interval using the following formula:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "PR k = C k /N k , k=1, 2, \u2026, 18",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "Here, C k is the number of correct conjectures of training sample i with (k-1)/18 d i < (k+1)/18, and N k is the number of training sample i with (k-1)/18",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "d i < (k+1)/18.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "Step 3: Find a regression curve for PR 1 , PR 2 , \u2026, PR 18 . Output the function of the regression curve. The confidence curve for WU is the black line in Figure 3 . The function derived was f(x)=0.1711*ln(x)+1.0357, where x is the absolute value of the difference between the normalized Su i (/lan/) and Su i (/ghun/).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 155,
                        "end": 163,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
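            {
                "text": "A minimal Python sketch of Algorithm 1 is given below, assuming NumPy is available. The inputs norm_scores, labels, and predictions are hypothetical containers for the normalized scores of Step 1, the correct pronunciations, and the WU outputs; using the midpoint of each interval as the regression abscissa and np.polyfit on ln(x) for the logarithmic fit are assumptions of this sketch, since the paper only reports the fitted function f(x)=0.1711*ln(x)+1.0357. The same routine with n_bins=13, applied to the squared normalization of Algorithm 2, would fit the WLDB curve.",
                "code": [
                    "import numpy as np",
                    "",
                    "def fit_confidence_curve(norm_scores, labels, predictions, n_bins=18):",
                    "    # norm_scores: (NSu_lan, NSu_ghun) per training sample, already normalized (Step 1).",
                    "    d = np.array([abs(ghun - lan) for lan, ghun in norm_scores])",
                    "    correct = np.array([p == y for p, y in zip(predictions, labels)])",
                    "    xs, ys = [], []",
                    "    for k in range(1, n_bins + 1):  # overlapping intervals [(k-1)/18, (k+1)/18), Step 2",
                    "        lo, hi = (k - 1) / n_bins, (k + 1) / n_bins",
                    "        mask = (d >= lo) & (d < hi)",
                    "        if mask.any():",
                    "            xs.append((lo + hi) / 2)         # assumption: interval midpoint as x",
                    "            ys.append(correct[mask].mean())  # PR_k = C_k / N_k",
                    "    a, b = np.polyfit(np.log(xs), ys, 1)     # Step 3: fit f(x) = a*ln(x) + b",
                    "    return lambda x: a * np.log(x) + b"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },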
            {
                "text": "Algorithm 2 is used to find the confidence curve for the word-based long-distance bigram language model (WLDB). We began by normalizing the scores of pronunciation Sb i (/lan/) and Sb i (/ghun/). In Step 2, we find the precision rates (PR k ) in the interval [0, 1] then calculate a regression curve for the PR k . The regression curve will be used to estimate the probability of making a correct decision. Again, it follows that, the higher the probability, the more confidence in the results from using WLDB.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "The confidence curve of WLDB is the black line in Figure 4 , in which the function is f(x) = 0.2346*ln(x) + 1.0523, where x is the difference between the normalized Sp i (/lan/) and Sp i (/ghun/).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 50,
                        "end": 58,
                        "text": "Figure 4",
                        "ref_id": "FIGREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Algorithm 1: Finding the confidence curve of WU.",
                "sec_num": null
            },
            {
                "text": "Input: The score of each training sample, named Sb i (/lan/) and Sb i (/ghun/), where i=1, 2, 3, \u2026, n, and n is the number of training samples. Output: A function for the confidence curve for the given Sb i (/lan/) and Sb i (/ghun/), i=1, 2, 3, \u2026, n. Algorithm:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 2: Find the confidence curve of WLDB",
                "sec_num": null
            },
            {
                "text": "Step 1: Normalize Sb i (/lan/) and Sb i (/ghun/) for each training sample i using the following formula: NSb i (/lan/)=Sb i (/lan/)/(Total number of words in training sample i) 2 NSb i (/ghun/)=Sb i (/ghun/)/(Total number of words in training sample i) 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 2: Find the confidence curve of WLDB",
                "sec_num": null
            },
            {
                "text": "Step 2: Let d i =| NSb i (/ghun/)-NSb i (/lan/)| and let D={d 1 , d 2 ,\u2026,d n }. Find the accuracy rate for each interval using the following formula: PR k = C k /N k , k=1, 2, \u2026, 13 where C k is the number of correct conjectures of training samples i with (k-1)/13 d i <(k+1)/13 and N k is the number of training samples i with (k-1)/13 d i <(k+1)/13. Step 3: Find a regression curve for PR 1 , PR 2 , \u2026, PR 13 . Output the function of the regression curve. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 2: Find the confidence curve of WLDB",
                "sec_num": null
            },
            {
                "text": "After the functions for the confidence curves for the two models have been derived, the combined approach can be applied. The two models are used to determine the pronunciation of \"\u6211\u5011\" (we) for a given input text. The two functions for the confidence curves, derived in Section 6.1, are applied to evaluate the degree of confidence in the two models. Let the confidence curves of the two models be C WU for WU and C WLDB for WLDB. We will use the results obtained using WU under the condition C WU > C WLDB . Otherwise, we will use the results obtained from using the WLDB model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Determining the Pronunciation for \"\u6211\u5011\" (we)",
                "sec_num": "6.2"
            },
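            {
                "text": "The selection rule can be written down directly from the two fitted curves. The following sketch hard-codes the functions reported in Section 6.1 and assumes its arguments are the normalized WU and WLDB scores for one test sample; how a zero score difference should be treated is not specified in the paper and is an assumption here. For the worked example that follows, the two confidence values are 0.875 (WU) and 0.761 (WLDB), so the WU choice /ghun/ is adopted.",
                "code": [
                    "import math",
                    "",
                    "# Confidence curves reported in Section 6.1, fitted on the training data.",
                    "def conf_wu(x):",
                    "    return 0.1711 * math.log(x) + 1.0357",
                    "",
                    "def conf_wldb(x):",
                    "    return 0.2346 * math.log(x) + 1.0523",
                    "",
                    "def combined_decision(nsu_lan, nsu_ghun, nsb_lan, nsb_ghun):",
                    "    d_wu, d_wldb = abs(nsu_ghun - nsu_lan), abs(nsb_ghun - nsb_lan)",
                    "    c_wu = conf_wu(d_wu) if d_wu > 0 else 0.0      # assumption: zero difference gives no confidence",
                    "    c_wldb = conf_wldb(d_wldb) if d_wldb > 0 else 0.0",
                    "    if c_wu > c_wldb:",
                    "        return '/lan/' if nsu_lan > nsu_ghun else '/ghun/'",
                    "    return '/lan/' if nsb_lan > nsb_ghun else '/ghun/'"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Determining the Pronunciation for \"\u6211\u5011\" (we)",
                "sec_num": "6.2"
            },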
            {
                "text": "Consider Figure 4 , which is derived from the training data. The x-axis is the normalized difference between the two scores. The y-axis is the percentage of correct decisions. Take the example sentence \"\u5982\u679c\u82b1\u65d7\u5e0c\u671b\u7e7c\u7e8c\u505a\u6211\u5011\u7684\u5927\u80a1\u6771\uff0c\u6211\u5011\u9084\u662f\u5f88\u6b61\u8fce\". We want to predict the pronunciation of the first \"\u6211\u5011\" (we) in the above sentence. Its confidences were 0.875 for the WU model (choosing /ghun/) and 0.761 for the WLDB model (choosing /lan/). Since the confidence of the WU model was higher than that of the WLDB model, we adopted /ghun/ as the pronunciation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 9,
                        "end": 17,
                        "text": "Figure 4",
                        "ref_id": "FIGREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Determining the Pronunciation for \"\u6211\u5011\" (we)",
                "sec_num": "6.2"
            },
            {
                "text": "We used the 639 testing samples described in Section 3.1. Among the 639 testing samples, there were 479 samples with the pronunciation /ghun/ and 160 samples with the pronunciation /lan/. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results Using Combined Models",
                "sec_num": "6.3"
            },
            {
                "text": "We used the test data mentioned in 3.1 as the experimental data. The overall accuracy rate from applying the combined approach was 93.6%. The accuracy rate was 95.00% when the answer was /lan/, and the accuracy rate was 93.1% when the answer was /ghun/. Based on these results, it can be concluded that the combination of the two models works very well in determining the pronunciation of the word \"\u6211\u5011\" (we) for a given Chinese text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Precision",
                "sec_num": null
            },
            {
                "text": "The three approaches, WU, WLDB, and combined, are compared in Table 6 . As shown in Table 6 , the word-based long-distance bigram language model (WLDB) worked well in the case of /lan/ and achieved an accuracy rate of 93.10%. The word-based unigram language (WU) model worked well in the case of /ghun/ and achieved an accuracy rate of 90.40%. The combined approach, however, achieved higher accuracy rates in both cases, achieving accuracy as high as 93.6%. There is an important issue in the combined approach. When we use a language model like WLDB, we may encounter the problem of data scarcity. If data is scarce, the combined approach will use the result of the word-based unigram language model. Table 7 compares the accuracy of the approaches used in this paper. The findings show that the combined approach (CP) performed the best. We can conclude that layered approach does not work well in determining the pronunciation of \"\u6211\u5011\" (we) in Taiwanese. It also shows that the polysemy problem caused by \"\u6211\u5011\" (we) is more difficult and quite different from that caused by the words \"\u4e0a\" (up), \"\u4e0b\" (down), and \"\uf967\" (no). This also shows that the viewpoints we gave in Section 2 are reasonable. For our approaches, we might encounter the problem of data sparseness, especially with WLDB. It seems that this cannot be avoided in processing languages like Taiwanese, for which corpora are rare. We have tried to use part-of-speech information as the features in our approaches. The experimental results are not good. We also find that most cases can be solved by using WU or WLDB, and only about 5% are solved by using default values. This shows that our approach is suitable for the current data size. We have shown that our combined approach is promising.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 62,
                        "end": 69,
                        "text": "Table 6",
                        "ref_id": "TABREF11"
                    },
                    {
                        "start": 84,
                        "end": 91,
                        "text": "Table 6",
                        "ref_id": "TABREF11"
                    },
                    {
                        "start": 703,
                        "end": 710,
                        "text": "Table 7",
                        "ref_id": "TABREF12"
                    }
                ],
                "eq_spans": [],
                "section": "Precision",
                "sec_num": null
            },
            {
                "text": "This paper proposes an elegant approach to determine the pronunciation of \"\u6211\u5011\" (we) in a C2T TTS system. Our methods work very well in determining the pronunciations of the Chinese word \"\u6211\u5011\" (we) in a C2T TTS system. Experimental results also show that the model used is better than the layered approach, the WU model, and the WLDB model. Polysemy problems in translating C2T are very common and it is imperative that they are solved in a C2T TTS system. We will continue to focus on other important polysemy problems in a C2T TTS system in the future.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Works",
                "sec_num": "7."
            },
            {
                "text": "The polysemy problem of \"\u6211\u5011\" (we) is more difficult than that of other words in Taiwanese. We have proposed a combined approach for this problem. If more training data can be prepared, the proposed approach can be expected to achieve better results. Nevertheless, as the training data needs to be processed manually, we will attempt to propose unsupervised approaches in the future.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Works",
                "sec_num": "7."
            },
            {
                "text": "To build a quality C2T TTS system is a long-term project because of the many issues in the text analysis phase. In contrast to a Mandarin TTS system, a C2T TTS system needs more textual analysis functions. In addition, two imperative tasks are the development of solutions for the polysemy problem and the tone sandhi problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Works",
                "sec_num": "7."
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "A Study of Evaluation Method for Synthetic Mandarin Speech",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Bao",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "The Third International Symposium on Chinese Spoken Language Processing",
                "volume": "",
                "issue": "",
                "pages": "383--386",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bao, H., Wang, A., & Lu, S. (2002). A Study of Evaluation Method for Synthetic Mandarin Speech, in Proceedings of ISCSLP 2002, The Third International Symposium on Chinese Spoken Language Processing, 383-386.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "A Mandarin Text-to-Speech System",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "H"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "H"
                        ],
                        "last": "Hwang",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [
                            "R"
                        ],
                        "last": "Wang",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Computational Linguistics and Chinese Language Processing",
                "volume": "1",
                "issue": "1",
                "pages": "87--100",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen, S. H., Hwang, S. H., & Wang, Y. R. (1996). A Mandarin Text-to-Speech System, Computational Linguistics and Chinese Language Processing, 1(1), 87-100.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A Hybrid Statistical/RNN Approach to Prosody Synthesis for Taiwanese TTS",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "C"
                        ],
                        "last": "Ho",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ho, C. C. (2000). A Hybrid Statistical/RNN Approach to Prosody Synthesis for Taiwanese TTS, Master thesis, Department of Communication Engineering, National Chiao Tung University.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Implementation of Tone Sandhi Rules and Tagger for Taiwanese TTS",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "Y"
                        ],
                        "last": "Hunag",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hunag, J. Y. (2001). Implementation of Tone Sandhi Rules and Tagger for Taiwanese TTS, Master thesis, Department of Communication Engineering, National Chiao Tung University.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Text to Pronunciation Conversion in Taiwanese",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "H"
                        ],
                        "last": "Hwang",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hwang, C. H. (1996). Text to Pronunciation Conversion in Taiwanese, Master thesis, Institute of Statistics, National Tsing Hua University.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "The Improving Techniques for Disambiguating Non-Alphabet Sense Categories",
                "authors": [
                    {
                        "first": "F",
                        "middle": [
                            "L"
                        ],
                        "last": "Hwang",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "J"
                        ],
                        "last": "Wu",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of ROCLING XIII",
                "volume": "",
                "issue": "",
                "pages": "67--86",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hwang, F. L., Yu, M. S., & Wu, M. J. (2000). The Improving Techniques for Disambiguating Non-Alphabet Sense Categories, in Proceedings of ROCLING XIII, 67-86.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A Taiwanese Text-to-Speech System with Application to Language Learning",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Liang",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "C"
                        ],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [
                            "C"
                        ],
                        "last": "Chiang",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "C"
                        ],
                        "last": "Lyu",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "Y"
                        ],
                        "last": "Lyu",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the IEEE International Conference on Advanced Learning Technologies",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Liang, M. S., Yang, R. C., Chiang, Y. C., Lyu, D. C., & Lyu, R. Y. (2004). A Taiwanese Text-to-Speech System with Application to Language Learning, in Proceedings of the IEEE International Conference on Advanced Learning Technologies, 2004.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "A Mandarin to Taiwanese Min Nan Machine Translation System with Speech Synthesis of Taiwanese Min Nan",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "J"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [
                            "H"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "International Journal of Computational Linguistics and Chinese Language Processing",
                "volume": "14",
                "issue": "1",
                "pages": "59--84",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lin, C. J. & Chen, H. H. (1999). A Mandarin to Taiwanese Min Nan Machine Translation System with Speech Synthesis of Taiwanese Min Nan, International Journal of Computational Linguistics and Chinese Language Processing, 14(1), 59-84.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "The Prediction of Pronunciation of Polyphonic Characters in a Mandarin Text-to-Speech System",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [
                            "C"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lin, Y. C. (2006). The Prediction of Pronunciation of Polyphonic Characters in a Mandarin Text-to-Speech System, Master thesis, Department of Computer Science and Engineering, National Chung Hsing University.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "An Efficient Mandarin Text-to-Speech System on Time Domain",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [
                            "J"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "IEICE Transactions on Information and Systems",
                "volume": "",
                "issue": "6",
                "pages": "545--555",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lin, Y. J. & Yu, M. S. (1998). An Efficient Mandarin Text-to-Speech System on Time Domain, IEICE Transactions on Information and Systems, E81-D(6), June 1998, 545-555.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A Multi-Layered Approach to the Polysemy Problems in a Chinese to Taiwanese TTS System",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [
                            "J"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "Y"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [
                            "T"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceeding of 2008 IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing",
                "volume": "",
                "issue": "",
                "pages": "428--435",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lin, Y. J., Yu, M. S., Lin, C. Y., & Lin, Y. T. (2008). A Multi-Layered Approach to the Polysemy Problems in a Chinese to Taiwanese TTS System, in Proceeding of 2008 IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, June, 2008, 428-435.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "An Implementation and Analysis of Mandarin Speech Synthesis Technologies",
                "authors": [
                    {
                        "first": "H",
                        "middle": [
                            "M"
                        ],
                        "last": "Lu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lu, H. M. (2002). An Implementation and Analysis of Mandarin Speech Synthesis Technologies, M. S. Thesis, Institute of Communication Engineering, National Chiao-Tung University, June 2002.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Improving Intonation Modules in Chinese TTS Systems",
                "authors": [
                    {
                        "first": "N",
                        "middle": [
                            "H"
                        ],
                        "last": "Pan",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "The 13th Conference on Artificial Intelligence and Applications (TAAI 2008",
                "volume": "",
                "issue": "",
                "pages": "329--336",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pan, N. H. & Yu, M. S. (2008). Improving Intonation Modules in Chinese TTS Systems, in The 13th Conference on Artificial Intelligence and Applications (TAAI 2008), 329-336, Nov. 21-22, 2008, Yilan, Taiwan.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "A Mandarin Text to Taiwanese Speech System",
                "authors": [
                    {
                        "first": "N",
                        "middle": [
                            "H"
                        ],
                        "last": "Pan",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "M"
                        ],
                        "last": "Tsai",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "The 13th Conference on Artificial Intelligence and Applications (TAAI 2008",
                "volume": "",
                "issue": "",
                "pages": "1--5",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pan, N. H., Yu, M. S., & Tsai, C. M. (2008). A Mandarin Text to Taiwanese Speech System, in The 13th Conference on Artificial Intelligence and Applications (TAAI 2008), 1-5, Nov. 21-22, 2008, Yilan, Taiwan.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Issues in Text-to-Speech Conversion for Mandarin",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Shih",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Sproat",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Computational Linguistics and Chinese Language Processing",
                "volume": "1",
                "issue": "",
                "pages": "37--86",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shih, C. & Sproat, R. (1996). Issues in Text-to-Speech Conversion for Mandarin, Computational Linguistics and Chinese Language Processing, 1(1), 37-86.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Variable-Length Unit Selection in TTS Using Structural Syntactic Cost",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "H"
                        ],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "C"
                        ],
                        "last": "Hsia",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "F"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "F"
                        ],
                        "last": "Wang",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "IEEE Transactions on Audio, Speech, and Language Processing",
                "volume": "15",
                "issue": "4",
                "pages": "1227--1235",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wu, C. H., Hsia, C. C., Chen, J. F., & Wang, J. F. (2007). Variable-Length Unit Selection in TTS Using Structural Syntactic Cost, IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1227-1235.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "An Implementation of Taiwanese Text-to-Speech System",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [
                            "C"
                        ],
                        "last": "Yang",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "The Polysemy Problem",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yang, Y. C. (1999). An Implementation of Taiwanese Text-to-Speech System, Master thesis, Department of Communication Engineering, National Chiao Tung University, 1999. The Polysemy Problem, an Important Issue in a 63",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Chinese to Taiwanese TTS System",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chinese to Taiwanese TTS System",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A Mandarin Text-to-Speech System Using Prosodic Hierarchy and a Large Number of Words",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "Y"
                        ],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "H"
                        ],
                        "last": "Hsu",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [
                            "H"
                        ],
                        "last": "Tsai",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. 17th Conference on Computational Linguistics and Speech Processing",
                "volume": "",
                "issue": "",
                "pages": "183--202",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yu, M. S., Chang, T. Y., Hsu, C. H., & Tsai, Y. H. (2005). A Mandarin Text-to-Speech System Using Prosodic Hierarchy and a Large Number of Words, in Proc. 17th Conference on Computational Linguistics and Speech Processing, (ROCLING XVII), 183-202, Sep. 15-16, 2005, Tainan, Taiwan.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Disambiguating the Senses of Non-Text Symbols for Mandarin TTS Systems with a Three-Layer Classifier",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [
                            "L"
                        ],
                        "last": "Huang",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Speech Communication",
                "volume": "39",
                "issue": "3-4",
                "pages": "191--229",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yu, M. S. & Huang, F. L. (2003). Disambiguating the Senses of Non-Text Symbols for Mandarin TTS Systems with a Three-Layer Classifier, Speech Communication, 39(3-4), 191-229.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "An Improvement on the Implementation of Taiwanese TTS System",
                "authors": [
                    {
                        "first": "X",
                        "middle": [
                            "R"
                        ],
                        "last": "Zhong",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhong, X. R. (1999). An Improvement on the Implementation of Taiwanese TTS System, Master thesis, Department of Communication Engineering, National Chiao Tung University.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "type_str": "figure",
                "text": "A Common module structure of a C2T TTS System.",
                "uris": null,
                "num": null
            },
            "FIGREF1": {
                "type_str": "figure",
                "text": "An example applying the layered approach. is (2/3, 1/3). Output /ghun/.",
                "uris": null,
                "num": null
            },
            "FIGREF3": {
                "type_str": "figure",
                "text": "Estimate the confidence curve using WU. The function we attained is f(x)=0.1711*ln(x)+1.0357.",
                "uris": null,
                "num": null
            },
            "FIGREF5": {
                "type_str": "figure",
                "text": "Estimate the confidence curve of WLDB. The function we attained is f(x)=0.2346*ln(x)+1.0523.",
                "uris": null,
                "num": null
            },
            "TABREF0": {
                "type_str": "table",
                "num": null,
                "text": "The Polysemy Problem, an Important Issue in a 45 Chinese to Taiwanese TTS System",
                "content": "<table><tr><td/><td>Input Chinese texts</td></tr><tr><td>Bilingual</td><td>Text Analysis</td></tr><tr><td>Lexicon</td><td/></tr><tr><td/><td>Tone Sandhi</td></tr><tr><td/><td>Prosody Generation</td></tr><tr><td>Synthesis units</td><td>Speech Synthesis</td></tr><tr><td/><td>Synthesized</td></tr><tr><td/><td>Taiwanese Speech</td></tr></table>",
                "html": null
            },
            "TABREF4": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td>News Category</td><td>Number of News Items</td><td>Number of News Items Containing the word \"\u6211\u5011\"</td><td>Percentage</td></tr><tr><td>International News</td><td>2242</td><td>326</td><td>14.5%</td></tr><tr><td>Travel News</td><td>9273</td><td>181</td><td>1.9%</td></tr><tr><td>Local News</td><td>6066</td><td>95</td><td>1.5%</td></tr><tr><td>Entertainment News</td><td>3231</td><td>408</td><td>12.6%</td></tr><tr><td>Scientific News</td><td>3520</td><td>100</td><td>2.8%</td></tr><tr><td>Social News</td><td>4936</td><td>160</td><td>3.2%</td></tr><tr><td>Sports News</td><td>2811</td><td>193</td><td>6.9%</td></tr><tr><td>Stock News</td><td>8066</td><td>83</td><td>1.0%</td></tr><tr><td>Total Number of News Items</td><td>40145</td><td>1546</td><td>3.9%</td></tr></table>",
                "html": null
            },
            "TABREF5": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td>Frequency of \"\u6211\u5011\"</td><td>Pronunciation /lan/</td><td>Pronunciation /ghun/</td><td>Total Frequency</td></tr><tr><td>Training data</td><td>640</td><td>1,916</td><td>2,556</td></tr><tr><td>Test data</td><td>160</td><td>479</td><td>639</td></tr><tr><td>Token frequency of \"\u6211\u5011\"</td><td>800</td><td>2,395</td><td>3,195</td></tr></table>",
                "html": null
            },
            "TABREF6": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td/><td colspan=\"2\">Number of test samples Number of correct samples</td><td>Accuracy rate</td></tr><tr><td>/ghun/</td><td>479</td><td>445</td><td>92.90%</td></tr><tr><td>/lan/</td><td>160</td><td>47</td><td>29.38%</td></tr><tr><td>Total</td><td>639</td><td>492</td><td>77.00%</td></tr></table>",
                "html": null
            },
            "TABREF8": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td>Window Size (M, N)</td><td>Accuracy when the pronunciation is /ghun/</td><td>Accuracy when the pronunciation is /lan/</td><td>Overall accuracy</td></tr><tr><td>(17, 10)</td><td>91.04%</td><td>74.92%</td><td>87.00%</td></tr><tr><td>(12,6)</td><td>94.01%</td><td>45.48%</td><td>81.85%</td></tr><tr><td>(19, 14)</td><td>88.75%</td><td>77.88%</td><td>86.03%</td></tr></table>",
                "html": null
            },
            "TABREF10": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td>Window Size (k L , k R )</td><td>Accuracy when the pronunciation is /ghun/</td><td>Accuracy when the pronunciation is /lan/</td><td>Overall accuracy</td></tr><tr><td>(11,7)</td><td>93.33%</td><td>97.04%</td><td>94.25%</td></tr><tr><td>(4, 13)</td><td>93.48%</td><td>93.61%</td><td>93.52%</td></tr><tr><td>(11,5)</td><td>89.69%</td><td>99.87%</td><td>92.15%</td></tr></table>",
                "html": null
            },
            "TABREF11": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td/><td colspan=\"2\">Accuracy using WU Accuracy using WLDB</td><td>Accuracy combing the two models</td></tr><tr><td>/ghun/</td><td>90.40%</td><td>83.26%</td><td>93.10%</td></tr><tr><td>/lan/</td><td>31.25%</td><td>93.10%</td><td>95.00%</td></tr><tr><td>Total</td><td>75.59%</td><td>85.72%</td><td>93.60%</td></tr></table>",
                "html": null
            },
            "TABREF12": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td/><td>WU</td><td>WLDB</td><td>LP</td><td>CP</td></tr><tr><td>/ghun/</td><td colspan=\"2\">90.40% 83.26%</td><td>92.90%</td><td>93.10%</td></tr><tr><td>/lan/</td><td colspan=\"2\">31.25% 93.10%</td><td>29.38%</td><td>95.00%</td></tr><tr><td>Total</td><td colspan=\"2\">75.59% 85.72%</td><td>77.00%</td><td>93.60%</td></tr></table>",
                "html": null
            }
        }
    }
}