{
    "paper_id": "O09-1015",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:10:50.962189Z"
    },
    "title": "Improving Translation Fluency with Search-Based Decoding and a Monolingual Statistical Machine Translation Model for Automatic Post-Editing",
    "authors": [
        {
            "first": "Jing-Shin",
            "middle": [],
            "last": "Chang",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Sheng-Sian",
            "middle": [],
            "last": "Lin",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The BLEU scores and translation fluency for the current state-of-the-art SMT systems based on IBM models are still too low for publication purposes. The major issue is that stochastically generated sentences hypotheses, produced through a stack decoding process, may not strictly follow the natural target language grammar, since the decoding process is directed by a highly simplified translation model and n-gram language model, and a large number of noisy phrase pairs may introduce significant search errors. This paper proposes a statistical post-editing (SPE) model, based on a special monolingual SMT paradigm, to \" translate\"disfluent sentences into fluent sentences. However, instead of conducting a stack decoding process, the sentence hypotheses are searched from fluent target sentences in a large target language corpus or on the Web to ensure fluency. Phrase-based local editing, if necessary, is then applied to correct weakest phrase alignments between the disfluent and searched hypotheses using fluent target language phrases; such phrases are segmented from a large target language corpus with a global optimization criterion to maximize the likelihood of the training sentences, instead of using noisy phrases combined from bilingually wordaligned pairs. With such search-based decoding, the absolute BLEU scores are much higher than automatic post editing systems that conduct a classical SMT decoding process. We are also able to fully correct a significant number of disfluent sentences into completely fluent versions. The BLEU scores are significantly improved. The evaluation shows that on average 46% of translation errors can be fully recovered, and the BLEU score can be improved by about 26%.",
    "pdf_parse": {
        "paper_id": "O09-1015",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The BLEU scores and translation fluency for the current state-of-the-art SMT systems based on IBM models are still too low for publication purposes. The major issue is that stochastically generated sentences hypotheses, produced through a stack decoding process, may not strictly follow the natural target language grammar, since the decoding process is directed by a highly simplified translation model and n-gram language model, and a large number of noisy phrase pairs may introduce significant search errors. This paper proposes a statistical post-editing (SPE) model, based on a special monolingual SMT paradigm, to \" translate\"disfluent sentences into fluent sentences. However, instead of conducting a stack decoding process, the sentence hypotheses are searched from fluent target sentences in a large target language corpus or on the Web to ensure fluency. Phrase-based local editing, if necessary, is then applied to correct weakest phrase alignments between the disfluent and searched hypotheses using fluent target language phrases; such phrases are segmented from a large target language corpus with a global optimization criterion to maximize the likelihood of the training sentences, instead of using noisy phrases combined from bilingually wordaligned pairs. With such search-based decoding, the absolute BLEU scores are much higher than automatic post editing systems that conduct a classical SMT decoding process. We are also able to fully correct a significant number of disfluent sentences into completely fluent versions. The BLEU scores are significantly improved. The evaluation shows that on average 46% of translation errors can be fully recovered, and the BLEU score can be improved by about 26%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Translation fluency of Machine Translation systems is a serious issue in the current SMT research works. With the research efforts for the past tens of years, the performances are still far from satisfactory. In translating English to Chinese, for instance, the BLEU scores [16] range only between 0.21 and 0.29 [22, 5, 17] , depending on test sets and numbers of reference translations. Such translation quality is extremely disfluent for human readers. We therefore propose a statistical post-editing (SPE) model, based on a special monolingual SMT framework, for improving the fluency and adequacy of translated sentences.",
                "cite_spans": [
                    {
                        "start": 274,
                        "end": 278,
                        "text": "[16]",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 312,
                        "end": 316,
                        "text": "[22,",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 317,
                        "end": 319,
                        "text": "5,",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 320,
                        "end": 323,
                        "text": "17]",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fluency Problems with Statistical Machine Translations",
                "sec_num": "1.1"
            },
            {
                "text": "The classical IBM SMT models [1, 2] formulate the translation problem of a source sentence F as finding the best translation E* from some stack decoded hypotheses, E, such that: The arg max E operation implies to generate candidate target sentences E of F so that the SMT model can score each one, based on the TM and LM scores and select the best candidate. The process of candidate generation is known as the decoding process. The conventional decoding process is significantly affected by the TM and LM scores; only those candidates that satisfy the underlying criteria of the TM and LM will receive high scores.",
                "cite_spans": [
                    {
                        "start": 29,
                        "end": 32,
                        "text": "[1,",
                        "ref_id": null
                    },
                    {
                        "start": 33,
                        "end": 35,
                        "text": "2]",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fluency Problems with Statistical Machine Translations",
                "sec_num": "1.1"
            },
            {
                "text": "\uf028",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fluency Problems with Statistical Machine Translations",
                "sec_num": "1.1"
            },
            {
                "text": "Unfortunately, to make the SMT computationally feasible, the TM and LM are highly simplified. Therefore, the candidates are not really generated based on target language grammar, but based on the model constraints. For instance, the classical SMT model does not prefer word re-ordering with long distance movement. Such candidates are then not generated regardless of the possibility that the target grammar might prefer them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fluency Problems with Statistical Machine Translations",
                "sec_num": "1.1"
            },
            {
                "text": "There are three directions to improve the translation fluency with the classical SMT model, Equation (1) . Firstly, we can improve the Translation Model (TM) to fit the source-target transfer process. Secondly, we can improve the Language Model (LM) to respect the target language grammar. Finally, we could try to generate better and much more fluent candidates in the decoding process so that the TM and LM can select the real best one from fluent candidates, rather than from junk sentences.",
                "cite_spans": [
                    {
                        "start": 101,
                        "end": 104,
                        "text": "(1)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "The research communities normally focus on the TM and LM components by assuming that there are good ways to generate good candidates for scoring. Actually, most attention is paid to the Translation Model (TM); LM and decoding were not gaining the same weight. In particular, people tend to think that the candidate generation process guided by the highly simplified TM and LM will eventually generate good candidates.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "Unfortunately, to make the computation feasible, the classical SMT models have very low expressive power in the Translation Model (TM) and Language Model (LM) components. It formulates the TM in terms of the fertility probability, lexical translation probability and distortion probability [1, 2] . A word-based 3-gram model is usually used as the language model (LM). Longer n-grams are used at higher training cost and severe data sparseness.",
                "cite_spans": [
                    {
                        "start": 290,
                        "end": 293,
                        "text": "[1,",
                        "ref_id": null
                    },
                    {
                        "start": 294,
                        "end": 296,
                        "text": "2]",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "In fact, the candidates of the target sentence, which are hidden in the arg max E operator, are generated as a stochastic process in most SMT today. Starting from a particular state, the next word is predicted based on a local n-gram window within a distance allowed by the distortion criterion; the possible paths are exploited using stack decoding, beam search or other searching algorithms. The candidates generated in this way thus may be only \" piecewise\"consistent with the target language grammar, but may not be really globally grammatical or fluent. This means that the TM and LM are not scoring a complete sentence but some segments pasted by the n-gram LM. It is then not likely to be fluent all the time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "This decoding process therefore sometimes falls into the \" garbage-in and garbage-out\" situation. No matter how well-formulated the TM and LM may be, if the stochastically generated candidates do not include the correct and fluent translation, the system will eventually deliver a garbage output, that is, a disfluent sentence, as the best one. This kind of error is known as searching error. Because the TM and LM have limited expressive power to describe the real criteria that carry the generation process, the decoding process might only generate noisy sentence segments and thus disfluent sentences for scoring. This could lead to bad performance in terms of BLEU score or human judgments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "Phrase-based SMT had partially resolved the expressive power issue of TM and LM by using longer word sequences. However, the acquisition of \" phrases\"has its own problems. In particular, most phrase-based SMT acquires the phrase pairs by conducting bilingual word alignment first. Adjacent words are then connected in some heuristic ways [12, 13, 14, 15] , which do not have direct link with the source or target grammar, to form the \" phrases\" . The phrases generated in this way normally do not satisfy any global optimization criteria related to the target grammar, such as maximizing the likelihood of the target language sentences. The quality of such phrases is therefore greatly affected by the word alignment accuracy; and, the phrases for the target language side may not really respect the target grammar. Under such circumstances, a huge number of noisy \" phrases\"will be introduced and significantly enlarge the searching space. The stochastically generated phrase sequences thus may not correspond to good candidate sentences either.",
                "cite_spans": [
                    {
                        "start": 338,
                        "end": 342,
                        "text": "[12,",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 343,
                        "end": 346,
                        "text": "13,",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 347,
                        "end": 350,
                        "text": "14,",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 351,
                        "end": 354,
                        "text": "15]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "To summarize, the application of word-for-word or phrase-to-phrase translation (with \" noisy\"phrases) plus a little bit local word/phrase re-ordering in classical SMT might not generate fluent target sentences that respect the target grammar. In particular, many target specific lexical items and morphemes cannot be generated through this kind of models. If they do, they may be generated in very special ways. This could be a significant reason why the SMT models do not work well after the long period of research.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "The implication is that we might have to examine the arg max E operation, that is, the decoding or searching process, in the classical SMT models more carefully. We should try decoding method that respect target grammar more, instead of following the criteria set forth by the TM and LM of the SMT model, which encode highly simplified version of the target grammar. Only with a decoding process that respect the target grammar, will the system generate fluent candidates at the first place before submitting the candidates to the TM and LM for scoring.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "Furthermore, a phrase-based language model, instead of word-based n-gram model for the target side may improve the fluency of machine translation further since more context words can be consulted, if the \" phrases\"are not noisy. To avoid a huge number of noisy source-dependent phrases that might be harmful for fluency and searching, such phrases may better be trained from a target corpus, instead of being acquired from bilingually wordaligned chunks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LM and Decoding",
                "sec_num": "1.2"
            },
            {
                "text": "Instead of developing new models for the TM and LM, an alternative to improve the translation fluency is to cascade an Automatic Post-Editing (APE) module to the translation output of an MT/SMT system. While the classical SMT models may not be suitable for directly generating fluent translation, due to the limited expressive power of the TM and LM and search errors of the decoding process, an SMT or its variant may be sufficient for reranking hypotheses in the automatic post editing purposes, if appropriate hypotheses generation mechanism is available. Actually, we can regard a post-editing process as a translation process from disfluent sentence to fluent sentence. This is particularly true if the disfluency is limited to local editing operations like insertion of target specific morphemes, deletion of source-specific function words, and lexical substitution from many possible lexical choices. These kinds of errors are often seen in MT/SMT systems. Inspired by the above ideas, this paper propose a statistical post-editing (SPE) model based on a monolingual SMT paradigm for improving the translation fluency of an MT system, instead of improving the TM directly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical Post-Editing Model Based on Monolingual SMT",
                "sec_num": "1.3"
            },
            {
                "text": "In this SPE model, the searching or decoding is a fluency-based search. We search fluent translations, based on the lexical hints of the disfluent sentence, from a large target text corpus or from the Web. Therefore, all candidates will be fluent ones. The best hypotheses reranked best by the SPE model will then serve as the post-edited version of the disfluent sentence. Sometimes, a searched sentence may not have a high translation score to justify itself as an appropriate translation. For instance, the target sentence pattern may be correct but different lexical choices have been made. In this case, automatic local editing is applied to the weakest alignments to incrementally patch the target sentence pattern with right target lexical items. By combining the grammatical (and fluent) sentence pattern of the searched sentence and the right lexical items from the disfluent sentence, the disfluent translation could be repaired to a fluent one incrementally. This may include some local insertion, deletion and lexical substitution operations over phrase pairs that are unlikely to be translation of each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical Post-Editing Model Based on Monolingual SMT",
                "sec_num": "1.3"
            },
            {
                "text": "To really improve the fluency incrementally, the local editing process is applied in a manner that will monotonically increase the likelihood of the incrementally repaired sentence. To respect the target grammar further, the repair is phrase-based. In other words, phrasebased n-gram language model (n=1) is used in the translation score so that the likelihood of the repaired target sentence is incrementally increased during the local editing process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical Post-Editing Model Based on Monolingual SMT",
                "sec_num": "1.3"
            },
            {
                "text": "In parallel with the development of our work, a few APE systems were also proposed [7, 20, 21, 8] with good results. Publicly available SMT systems (like Portage PBMT, Moses, etc.) are used directly as the post-editing module. They are trained using human post-edited target sentences with their un-edited MT outputs to learn the translation knowledge between disfluent (' source' ) and fluent (' target' ) sentences [20] . Alternatively, they may be trained using standard parallel corpora (Europarl, News Commentary, Job Bank, Hansard, etc.) where the disfluent sentences are generated using a rule-based MT (like SYSTRAN) or other SMT [21] .",
                "cite_spans": [
                    {
                        "start": 83,
                        "end": 86,
                        "text": "[7,",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 87,
                        "end": 90,
                        "text": "20,",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 91,
                        "end": 94,
                        "text": "21,",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 95,
                        "end": 97,
                        "text": "8]",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 417,
                        "end": 421,
                        "text": "[20]",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 638,
                        "end": 642,
                        "text": "[21]",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical Post-Editing Model Based on Monolingual SMT",
                "sec_num": "1.3"
            },
            {
                "text": "Therefore, these works require substantial human post-editing costs to train the SMT. Or they need a sizable parallel corpus for training, which may not be available to many language pairs. In addition, it requires an RBMT or SMT pre-trained for translating the source corpus, which may not be available to many language pairs. Most importantly, these frameworks use the same decoding process as well as the TM and LM of the original SMT to generate their post-editing hypotheses. Therefore, the previously discussed performance issues that apply to classical SMT will also apply to such APE modules. The cascade of an SMT as an APE module might imply the use of a system with low BLEU performance to correct the outputs with low BLEU scores. The improvement could thus be substantially limited. This may be seen from the fact that the contribution of the APE becomes negligible as the training data is increased [21] .",
                "cite_spans": [
                    {
                        "start": 913,
                        "end": 917,
                        "text": "[21]",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical Post-Editing Model Based on Monolingual SMT",
                "sec_num": "1.3"
            },
            {
                "text": "In contrast, we discard the stochastic decoding process, which might generate disfluent hypotheses, but search a large corpus for highly similar sentences to the disfluent sentence, and thus will have raw hypotheses with high BLEU scores. Additional local editing will further improve the fluency. Furthermore, our proposal can generate interesting error patterns automatically using the target language corpus alone. Therefore, the APE module can be constructed without a real MT system (although it would be better to have one in order to correct the specific errors of a specific system.). The following sections will discuss the formulation in more details.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical Post-Editing Model Based on Monolingual SMT",
                "sec_num": "1.3"
            },
            {
                "text": "In our work, we propose to adopt a Statistical Post-Editing (SPE) Model to translate disfluent s e n t e n c e s i n t o f l u e n t v e r s i o n s . S u c h a s y s t e m c a n b e r e g a r d e d a s a \" d i s f l u e n t -to-f l u e n t \" S MT . As will be seen later, it can be trained with a Monolingual SMT Model. Given a disfluent sentence E' translated form a source sentence F, the automatic post-editing problem can be formulated as finding the most fluent sentence E* from some candidate sentences E such that:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation for SPE",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028\uf029 E E E E E E E E Pr | ' Pr max arg ' | Pr max arg * \uf03d \uf03d",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Problem Formulation for SPE",
                "sec_num": "2"
            },
            {
                "text": "As usual, we will refer Pr(E' |E) as the translation model (TM), and Pr(E) as the language model (LM) of the SPE model. We thus encountered the same SMT problems to formulate the TM, LM and the decoding (or searching) process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation for SPE",
                "sec_num": "2"
            },
            {
                "text": "The automatic post-editing problem is intuitively easier than SMT since we can assumes that the disfluency is due to some local editing errors, such as mis-insertion or mis-deletion of function words, and wrong lexical choices. Under this assumption, we can formulate the TM as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Order-Preserved Translation Model",
                "sec_num": "2.1"
            },
            {
                "text": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 ' ' Pr ' | Pr ', | max Pr ', | Pr ', | Pr | p s p A A s p p E A E E E E A E E A E E A E E E \uf03d \uf03d \uf0bb \uf0bb \uf03d \uf0e5 \uf0d5 (3)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Order-Preserved Translation Model",
                "sec_num": "2.1"
            },
            {
                "text": "In Eqn. (3), phrase-aligned phrase pairs are represented by E' p and Ep for the disfluent and fluent versions, respectively. We assume that the most likely alignment A s , among all generic alignment pattern A, between E'and E is an \" order-preserved\"or \" sequential\" alignment between their constituents. We further assume that this most likely alignment has much higher probability than other alignments such that we don' t have to sum over all generic alignment patterns. In the post-editing context, this assumption may be reasonable if the disfluency results from simple local editing operations. In particular, if we are using phrase-based alignment, the word order within the phrases can be ignored. The order preservation assumption will be even more reasonable. We therefore assume that the TM is the product of the probabilities of sequentially aligned target phrase pairs. The phrase segmentation model for dividing E or E'into phrases will be further detailed later when discussing the target phrase-based LM. Given the segmented phrases, the best sequential alignment can easily be found using a standard dynamic programming algorithm for finding the \" shortest path\" .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Order-Preserved Translation Model",
                "sec_num": "2.1"
            },
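            {
                "text": "[Editor's illustration] A minimal Python sketch of the order-preserved ('shortest path') phrase alignment described above. The function names, the gap penalty, and pair_logprob (log Pr(E'p|Ep)) are assumptions for illustration, not the authors' implementation:

def order_preserved_alignment(e_prime, e, pair_logprob, gap_logprob=-20.0):
    # Monotonic (order-preserved) alignment between the disfluent phrase
    # sequence e_prime and the fluent phrase sequence e, computed with the
    # standard shortest-path / edit-distance dynamic program.
    n, m = len(e_prime), len(e)
    NEG = float('-inf')
    score = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    score[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if score[i][j] == NEG:
                continue
            if i < n and j < m:  # align e_prime[i] with e[j]
                s = score[i][j] + pair_logprob(e_prime[i], e[j])
                if s > score[i + 1][j + 1]:
                    score[i + 1][j + 1], back[i + 1][j + 1] = s, (i, j, 'pair')
            if i < n:            # leave e_prime[i] unaligned (extra phrase in E')
                s = score[i][j] + gap_logprob
                if s > score[i + 1][j]:
                    score[i + 1][j], back[i + 1][j] = s, (i, j, 'gap')
            if j < m:            # leave e[j] unaligned (extra phrase in E)
                s = score[i][j] + gap_logprob
                if s > score[i][j + 1]:
                    score[i][j + 1], back[i][j + 1] = s, (i, j, 'gap')
    pairs, i, j = [], n, m       # trace back the best sequential alignment A_s
    while (i, j) != (0, 0):
        pi, pj, op = back[i][j]
        if op == 'pair':
            pairs.append((e_prime[pi], e[pj]))
        i, j = pi, pj
    return score[n][m], list(reversed(pairs))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Order-Preserved Translation Model",
                "sec_num": "2.1"
            },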
            {
                "text": "The TM for the SPE model is special in that the training corpus can be easily acquired from a large monolingual corpus with fluent target sentences. Generating a disfluent version of the fluent monolingual corpus automatically based on some error model of the translation process will make this possible. One can then easily acquire the model parameters for translating disfluent sentences into fluent ones through a similar training process for a standard SMT. In comparison with standard SMT training, which requires a parallel bilingual corpus, the monolingual corpus is much easier to acquire.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Order-Preserved Translation Model",
                "sec_num": "2.1"
            },
            {
                "text": "To respect the fluency of the target language in the decoding process, the language model score Pr(E) should be evaluated based on long target language phrases, Ep, instead of target words. The \" phrases\"should also be defined independent of source-language in order not to introduce a huge number of noisy phrases as PBSMT normally did. The proposed LM for the current SPE, which is responsible for selecting fluent target segments, is therefore a phrasebased unigram model, instead of the widely used word-based n-gram model. In other words, we have",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },
            {
                "text": "\uf028 \uf029 \uf028 \uf029 Pr Pr Ep E E Ep \uf0ce \uf03d \uf0d5 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },
            {
                "text": "To avoid source-language dependency, we also decided not to define target phrases in terms of chunks of bilingually aligned words. Instead, the best target phrases are directly trained from the monolingual target corpus by optimizing the phrase-based unigram model. In other words, the best phrase sequence * p \uf072 for an n-word sentence 1 n w , will be the sequence, among all possible phrase segmentation, 1 m p , such that:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },
            {
                "text": "\uf028 \uf029 \uf028 \uf029 1 1 * 1 1 arg max Pr | arg max Pr m m m n i p p i p p w p \uf03d \uf03d \uf0d5 \uf072 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },
            {
                "text": "Fortunately, extracting monolingual phrases using the phrase-based uni-gram model can be done easily. The training method is just like the word based uni-gram word segmentation model [4] , which was frequently used in Chinese word segmentation tasks. Unsupervised training is easy for this. Upon convergence, a set of well-formed phrases can be acquired. (This set of phrases will be called a phrase example base, PEB. Phrases in the PEB will be used later in the Local Editing Algorithm for post-editing.)",
                "cite_spans": [
                    {
                        "start": 183,
                        "end": 186,
                        "text": "[4]",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },
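            {
                "text": "[Editor's illustration] A minimal sketch of the phrase-based unigram segmentation and its unsupervised (hard-EM style) training, in the manner of unigram word segmentation; the helper names, the maximum phrase length, and the substring initialization are assumptions, not the authors' exact procedure:

import math
from collections import Counter

def viterbi_segment(sentence, phrase_logprob, max_len=6):
    # Best segmentation p_1..p_m of a word sequence under the phrase-based
    # unigram model: argmax over segmentations of sum_i log Pr(p_i).
    n = len(sentence)
    best = [float('-inf')] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            s = best[j] + phrase_logprob(tuple(sentence[j:i]))
            if s > best[i]:
                best[i], back[i] = s, j
    phrases, i = [], n           # recover the phrase boundaries
    while i > 0:
        phrases.append(tuple(sentence[back[i]:i]))
        i = back[i]
    return list(reversed(phrases))

def train_phrase_unigram(corpus, iterations=5, max_len=6):
    # Segment with the current Pr(p), re-estimate Pr(p) from the phrase
    # counts, and repeat until (approximate) convergence.
    counts = Counter()
    for sent in corpus:          # initialize with all substrings up to max_len
        for i in range(len(sent)):
            for k in range(1, max_len + 1):
                counts[tuple(sent[i:i + k])] += 1
    for _ in range(iterations):
        total = sum(counts.values())
        lp = lambda p, c=counts, t=total: (
            math.log(c[p] / t) if c[p] else float('-inf'))
        new_counts = Counter()
        for sent in corpus:
            for p in viterbi_segment(sent, lp, max_len):
                new_counts[p] += 1
        counts = new_counts
    return counts                # the phrase example base (PEB)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },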
            {
                "text": "Since a phrase trained in this way can be longer than a 3-gram pattern, the modeling error could be reduced to some extend. Furthermore, the number of such phrases will be much smaller than those randomly combined phrases acquired from word-aligned word chunks. As a result, the estimation error due to data sparseness will be significantly reduced too. Unlike the rare parallel bilingual training corpus, the amount of such target language corpora is extremely large. Therefore, fluent phrases can be extracted easily. With phrases as the basic lexical unit, SPE model will reduces to Since a phrase can cover more than 3 words, the selected phrases might be more fluent than word trigrams. Such phrases will fit target grammar better and therefore will prefer more fluent target sentences in general.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase-Based Language Model",
                "sec_num": "2.2"
            },
            {
                "text": "One key issue that causes disfluency is the decoding process used in classical SMT. Most decoding process regard target sentence generation as a stochastic process, and only local context of finite length window is consulted while decoding. Therefore, the target sentences generated in this way are usually not fluent. Our work proposes to search fluent translation candidates from a huge target sentence base or from web documents, instead of using traditional decoding methods to generate the translation candidates. Since the large corpus and the Web documents are produced by native speakers, the target sentences thus searched are most likely fluent with high BLEU scores.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },
            {
                "text": "Our current work simply used a heuristic matching score to extract a set of candidate sentences for a disfluent sentence. The candidates are then re-ranked using the translation score defined by the SPE model. The best candidate will be regarded as the post-edited version of the disfluent sentence if the translation score is higher than a threshold. Otherwise, it will be locally edited to incrementally increase its translation score. The matching score is simply the number of identical word tokens in two sentences, which is normalized by the average length of the two sentences. In other words, it is the percentage of word matches between two sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },
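            {
                "text": "[Editor's illustration] The matching score above is simple enough to state directly. A sketch, with the token-overlap count taken as a multiset intersection (an assumption, since the paper does not spell out how repeated tokens are counted):

from collections import Counter

def match_score(cand, disfluent):
    # Number of identical word tokens shared by the two sentences,
    # normalized by their average length, i.e. the percentage of matches.
    common = sum((Counter(cand) & Counter(disfluent)).values())
    return common / ((len(cand) + len(disfluent)) / 2.0)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },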
            {
                "text": "We searched the candidate translations from the Academia Sinica Word Segmentation Corpus, ASWSC-2001 [6] , as well as Chinese webpages indexed by Google. (We assume that the target language is Chinese.) Different query strings will result in different returned pages. Totally, we have tried 4 models for searching:",
                "cite_spans": [
                    {
                        "start": 101,
                        "end": 104,
                        "text": "[6]",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },
            {
                "text": "(1) Model C: search the corpus (only) for Top-N hypotheses (N=20). (The length difference must not be greater than two words.) (2) Model C+W: search the corpus and the web for additional N hypotheses by submitting the complete disfluent target sentence as-is to Google. (3) Model C+W+P: including partial matches against substrings of the disfluent target sentence, where 1~L-1 words in the disfluent sentence are successively deleted and then submitted as query strings to the search engine. (L: number of words in disfluent sentence) (4) Model C+W+Q: adjacent words in the deleted disfluent sentence are quoted as a single query token before submission so that the search engine will match more exactly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },
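            {
                "text": "[Editor's illustration] One possible reading of the query generation for Models C+W+P and C+W+Q; the assumption that the deleted words form a contiguous span is the editor's, as the paper does not fully specify the deletion scheme:

def partial_queries(words, quote_adjacent=False):
    # Drop d = 1..L-1 words (here: a contiguous span) from the disfluent
    # sentence and submit the survivors as a query (Model C+W+P).  With
    # quote_adjacent=True (Model C+W+Q), each surviving run of adjacent
    # words is quoted so the engine matches it as a single token.
    L = len(words)
    for d in range(1, L):
        for start in range(L - d + 1):
            left, right = words[:start], words[start + d:]
            if not quote_adjacent:
                yield ' '.join(left + right)
                continue
            runs = [r for r in (left, right) if r]
            yield ' '.join('\"' + ' '.join(r) + '\"' for r in runs)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },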
            {
                "text": "Even with such a heuristic search, a substantial number of fluent sentences similar to the disfluent sentences can be found for re-ranking and local editing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search-Based Decoding for Fluency",
                "sec_num": "2.3"
            },
            {
                "text": "If exact translation is found during searching, the searching process itself is exactly a perfect translation process. If highly similar sentences are found, simple lexical substitution or automatic post-editing [9, 11] might patch the searched fluent sentences into correct translations. Some previous works for automatic post editing have been restricted to special function words, such as the English article ' the/a' [9, 10] , the Japanese case markers and Chinese classifier or particle ' de' [18] . The automatic post-editing model here is intended to resolve general editing errors that are frequently made by a machine translation system. Briefly, the best sentence eb E * in the searched candidates will be output as the translation of the disfluent translation E'if the translation score associated with the SPE model is higher than a threshold. (The set of candidate translation sentences is called its example base, thus the subscript ' eb' .) Otherwise, the automatic local editing algorithm will find the weakest phrase alignments and fix them one-by-one to maximize the translation score.",
                "cite_spans": [
                    {
                        "start": 212,
                        "end": 215,
                        "text": "[9,",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 216,
                        "end": 219,
                        "text": "11]",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 421,
                        "end": 424,
                        "text": "[9,",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 425,
                        "end": 428,
                        "text": "10]",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 498,
                        "end": 502,
                        "text": "[18]",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Local Editing",
                "sec_num": "2.4"
            },
            {
                "text": "An alignment phrase pair <Ep' , Ep> is said to be \" weak\"if its local alignment score Pr(Ep' |Ep) x Pr(Ep) is small and thus contributes little to the global translation score for the sentence pair <E' , E>. When the weakest pair, (Ep' -| Ep-) with the lowest local alignment score is identified, we should try to replace Ep-, the \" most questionable phrase\"in the fluent (yet incorrect) example sentence E, with some candidates that would make the patched example sentence more likely to be the translation of E' .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Local Editing",
                "sec_num": "2.4"
            },
            {
                "text": "There are some reasons why the alignment (Ep' -| Ep-) is the weakest. First of all, Epmight not be the right phrase, and should be replaced by Ep' -to make the fluent sentence E also the correct translation of E' . Second, Ep' -might not be the correct translation of some source phrase. In this case, the most likely translation(s) of Ep' -, called Ep+, should be used to replace Ep-. Third, Ep-is a more appropriate phrase than Ep+. In this case, it should be retained and next weakest alignment pair be repaired.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Local Editing",
                "sec_num": "2.4"
            },
            {
                "text": "As a result, potential candidates for replacing Ep-will include Ep' -, Ep+ and Ep-itself. The best substitution will be the phrase that maximizes Pr(Ep' |Ep) x Pr(Ep). Actually, many phrases in the PEB can be a more fluent version of Ep' -. Currently, the 20 best matches will play the role of Ep+ during local editing. And the local editing algorithm will successively edit weaker alignments until the (monotonically increasing) translation score is above some threshold. The algorithm is outlined as follows. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Local Editing",
                "sec_num": "2.4"
            },
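            {
                "text": "[Editor's illustration] A sketch of the local editing loop, reconstructed from the description above; pair_logprob, lm_logprob, peb_matches and the stopping threshold are illustrative assumptions, not the authors' code:

def local_edit(pairs, pair_logprob, lm_logprob, peb_matches, threshold,
               max_rounds=10):
    # pairs: the order-preserved alignment [(E'p, Ep), ...] of the disfluent
    # sentence against the best searched example sentence.
    def local_score(ep_prime, ep):   # log( Pr(E'p|Ep) * Pr(Ep) )
        return pair_logprob(ep_prime, ep) + lm_logprob(ep)
    frozen = set()                   # alignments judged best left as-is
    for _ in range(max_rounds):
        if sum(local_score(a, b) for a, b in pairs) >= threshold:
            break                    # fluent enough: stop editing
        open_idx = [i for i in range(len(pairs)) if i not in frozen]
        if not open_idx:
            break
        # Find the weakest alignment (lowest local score) not yet frozen.
        k = min(open_idx, key=lambda i: local_score(*pairs[i]))
        ep_prime, ep_minus = pairs[k]
        # Candidates for replacing Ep-: E'p- itself, Ep- itself, and the
        # 20 best PEB matches playing the role of Ep+.
        cands = [ep_prime, ep_minus] + list(peb_matches(ep_prime))[:20]
        best = max(cands, key=lambda c: local_score(ep_prime, c))
        if best == ep_minus:
            frozen.add(k)            # keep Ep-; repair the next weakest pair
        else:
            pairs[k] = (ep_prime, best)   # monotone score improvement
    return [ep for _, ep in pairs]   # phrases of the repaired sentence",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Local Editing",
                "sec_num": "2.4"
            },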
            {
                "text": "Note that, local editing is applied only to a local region of the example sentence based on the disfluent sentence. Intuitively, those sentences searched from a text corpus or from the Web corpus will be much more fluent than stochastically combined sentences from the SMT decoding module. Even if local editing is required, the repair will be quite local. The search space for repairing will be significantly constrained by words in the most likely example sentence. Such a searching and local editing combination can thus be regarded as a constrained decoding. The searching error can thus be reduced significantly in comparison with the large search space of the decoding process of a typical SMT.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constrained Decoding",
                "sec_num": null
            },
            {
                "text": "The TM parameters can actually be trained from an E'-to-E monolingual Machine Translation System, where E' can be derived by applying to E some commonly found editing operations in the SMT translation process. The operations might include the insertion of target specific lexicon, deletion of source specific lexicon, local reordering of words and substitution of lexical items.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "In the current work, we apply three kinds of editing operations to the fluent sentences in a monolingual corpus to simulate frequently found errors in an MT system. The fluent and its disfluent versions are then phrase segmented so that the sentences are represented by phrase tokens (instead of word tokens). Such fluent-disfluent (E-E' ) target sentence pairs are then trained using the GIZA++ alignment tools [12, 13, 14, 15] . Upon convergence, the translation model between the sentences to be post-edited and their correct translation can readily be acquired.",
                "cite_spans": [
                    {
                        "start": 412,
                        "end": 416,
                        "text": "[12,",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 417,
                        "end": 420,
                        "text": "13,",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 421,
                        "end": 424,
                        "text": "14,",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 425,
                        "end": 428,
                        "text": "15]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "The three editing operations include:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "(1) Insertion: The insertion errors will occur when an MT system translates a source word into a target word while it should not be translated. For instance, the English infinitive \" to\"need not be translated into any Chinese word most of the time. But the bilingual dictionary may indicate the possibility to translate it into \" \u53bb \"(chu). We therefore automatically insert the Chinese words to simulate such an error.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "(2) Deletion: The deletion error occurs when a target specific word is not generated in the translation. For instance, the Chinese classifiers have no correspondence in the English language. We therefore delete the following classifiers from fluent Chinese sentences to create instances with deletion errors:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "' \u500b' , ' \u96bb' , ' \u679d' , ' \u4f4d' , ' \u9846' , ' \u68f5' .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "(3) Substitution: When a translation system chooses a wrong lexical item, a typical substitution error will occur. To simulate the substitution errors, Chinese words in the fluent sentences are lookup against an English-Chinese dictionary. Chinese words that are also the translation of the English word are then substituted to simulate the substitution error. For instance, ' \u554f\u984c' is a Chinese translation for the English word ' problem' . But ' problem' also has other translations, like ' \u7fd2\u984c' and ' \u7591\u96e3' . These words are therefore used to simulate the substitution errors. In our simulation, the top-30 most frequently used Chinese words are adopted to simulate the substitution errors.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
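            {
                "text": "The following minimal Python sketch (not the authors' code) illustrates the three corruption operations on word-token lists; the single spurious word, the classifier list and the two-entry ALTERNATIVES dictionary are illustrative stand-ins for the full English-Chinese dictionary and the top-30 word list used in the paper:\n\nimport random\n\nCLASSIFIERS = ['\u500b', '\u96bb', '\u679d', '\u4f4d', '\u9846', '\u68f5']\nALTERNATIVES = {'\u554f\u984c': ['\u7fd2\u984c', '\u7591\u96e3']}  # competing translations of 'problem'\n\ndef insert_error(tokens, spurious='\u53bb'):\n    # Insertion: add a target word that should not have been generated.\n    pos = random.randrange(len(tokens) + 1)\n    return tokens[:pos] + [spurious] + tokens[pos:]\n\ndef delete_error(tokens):\n    # Deletion: drop target-specific words such as Chinese classifiers.\n    return [t for t in tokens if t not in CLASSIFIERS]\n\ndef substitute_error(tokens):\n    # Substitution: swap a word for a competing translation of the same English word.\n    return [random.choice(ALTERNATIVES[t]) if t in ALTERNATIVES else t for t in tokens]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },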
            {
                "text": "With disfluent sentences created from fluent sentences with the above frequently encountered translation errors, an automatic statistical post-editing model can readily be trained using state-of-the-art alignment tools.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Faulty Sentences",
                "sec_num": "2.5"
            },
            {
                "text": "To see the performance of the current SMT-based SPE model, about 300,000 word segmented Chinese sentences from the Academia Sinica [6] was used as our target sentence corpus. The corpus has about 2,450,000 word tokens, and the vocabulary size is about 83,000 word types. 10% of the sentences are used as the test set and 90% are used for training. The 3 types of errors are applied to the testing sentences independently. For each error type, 100 sentences are randomly selected for evaluating automatic post editing.",
                "cite_spans": [
                    {
                        "start": 131,
                        "end": 134,
                        "text": "[6]",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "3"
            },
            {
                "text": "The performance is evaluated in terms of two criteria. The first criterion is the number (percentage) of fully corrected disfluent sentences from the test set. By fully corrected, we mean that the sentence corrected by the statistical post editing (SPE) system is completely the same as its original fluent version. translation. With the SPE, the local editing algorithm tries to maximize the translation score for each local editing. It therefore improves the translation fluency incrementally. Since the TM can be trained from an automatically generated fluent-disfluent parallel corpus, training such a system is easy. The evaluation shows that, on average, 46% of translation errors can be fully recovered, and the BLEU score can be improved by about 26%. The absolute BLEU is also high with the search-based decoding process in comparison with conventional decoding process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "3"
            }
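            ,
            {
                "text": "A minimal sketch of the two evaluation criteria above (an illustration under stated assumptions, not the paper's tooling: exact string match implements the fully-corrected check, and the modern sacrebleu package stands in for the paper's BLEU scoring):\n\nimport sacrebleu\n\ndef evaluate(corrected, originals):\n    # Criterion 1: percentage of sentences fully restored to the fluent original.\n    fully = sum(1 for c, o in zip(corrected, originals) if c == o)\n    fully_pct = 100.0 * fully / len(originals)\n    # Criterion 2: corpus BLEU of the post-edited output against the originals.\n    bleu = sacrebleu.corpus_bleu(corrected, [originals]).score\n    return fully_pct, bleu",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "3"
            }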
        ],
        "back_matter": [
            {
                "text": "Note that, even with the very simple minded searching method, the SPE was able to correct, on average, about 48% of the faulty sentences to their fluent version if the search space is sufficiently large (with the C+W+Q searching model). The performance increases with the search space. And the performance is increased at most by 62%, 121% and 17.5 %, respectively for the substitution, deletion and insertion errors when the Web corpus is included to the search space. Obviously, the substitution is the hardest to resolve while insertion error seems to be easier to resolve.The second evaluation criterion is the improvement in the BLUE score with respect to the un-corrected test sentence. Table 2 shows the BLEU scores for the various searching models. The first column labeled as E' (ts) lists the BLEU scores for the test sentences that has not been post-edited. By searching for fluent translation and applying local editing, the BLEU scores are improved with increasing search space. The best performance is to increase the BLEU scores by 15%, 38% and 26% respectively for the three types of errors. On average, the improvement is about 26%, which is substantial. On the other hand, the absolute changes are 9. Note that, with search-based decoding, the absolute BLEU scores are much higher than automatic post editing systems that simply cascade a classical SMT module to the output of an MT/SMT [20, 21, 8] . Although the experiment settings are not the same and thus cannot be compared directly, the results to have higher absolute BLEU scores can be expected since searched sentences are almost always fluent, whether they are post-edited or not.Obviously, with the same training corpus, the search space and the searching method play important roles in improving the performance. The inclusion of the web corpus does improve the performance significantly. It was reported in [19] that well formulated query strings can effectively improve searching accuracy. Therefore, by using better searching strategy, part of the translation problems for fluent translation might be resolved as a searching and automatic post-editing problems. Currently, a statistical searching model specific for the fluency-based decoding is being developed.",
                "cite_spans": [
                    {
                        "start": 1405,
                        "end": 1409,
                        "text": "[20,",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 1410,
                        "end": 1413,
                        "text": "21,",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1414,
                        "end": 1416,
                        "text": "8]",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1888,
                        "end": 1892,
                        "text": "[19]",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 693,
                        "end": 700,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "annex",
                "sec_num": null
            },
            {
                "text": "In this paper, we propose not to generate sentence hypotheses for APE systems by using conventional SMT decoding process, since such a decoding process tends to lead to an openended search space. It is not easy to generate fluent sentence hypotheses under such circumstances due to the large search error. We propose to search sentence hypotheses, from a large target text corpus or from the web, based on the words in the disfluent translations, since the potential candidates will mostly be fluent. A statistical post-editing model is also proposed to re-rank the searched sentences, and a local editing algorithm is proposed to automatically recover the translation errors when the searched sentence is not a good",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Concluding Remarks",
                "sec_num": "4"
            }
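            ,
            {
                "text": "As a rough illustration of the local editing loop summarized in TABREF2, the following sketch passes the paper's components in as stand-in callables (weakest_phrase, align, the phrase-editing base PEB, the translation score Pr(E'|E) x Pr(E) as score, and a phrase-splicing helper replace); all of these names are assumptions, not the paper's code:\n\ndef local_edit(E_prime, E_eb, weakest_phrase, align, PEB, score, replace, threshold):\n    # E_prime: disfluent translation; E_eb: searched candidate sentence.\n    while score(E_prime, E_eb) < threshold:\n        ep_w = weakest_phrase(E_prime, E_eb)   # Step 1: weakest aligned phrase in E'\n        ep_minus = align(ep_w, E_eb)           # Step 2: its aligned phrase in E_eb\n        ep_plus = PEB.get(ep_w, ep_w)          # Step 3: fluent phrase for ep_w from the PEB\n        # Step 4: pick the substitution maximizing the translation score.\n        best = max((ep_w, ep_plus, ep_minus), key=lambda p: score(E_prime, replace(E_eb, ep_minus, p)))\n        if best == ep_minus:\n            break                              # no improving edit; stop\n        E_eb = replace(E_eb, ep_minus, best)   # Step 5: splice in the best phrase\n    return E_eb",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Concluding Remarks",
                "sec_num": "4"
            }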
        ],
        "bib_entries": {
            "BIBREF1": {
                "ref_id": "b1",
                "title": "T h e mathematics of statistical machine translation: Parameter e s t i ma t i o n",
                "authors": [
                    {
                        "first": "Peter",
                        "middle": [
                            "F"
                        ],
                        "last": "Brown",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [
                            "A."
                        ],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [
                            "J."
                        ],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [
                            "L."
                        ],
                        "last": "Mercer",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Computational Linguistics",
                "volume": "19",
                "issue": "2",
                "pages": "263--311",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer, \" T h e mathematics of statistical machine translation: Parameter e s t i ma t i o n . \" Computational Linguistics, 19(2):263-311, 1993.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A Chinese-to-Chinese Statistical Machine Translation Model for Mining Synonymous Simplified-T r a d i t i o n a l C h i n e s e T e r ms",
                "authors": [
                    {
                        "first": "Jing-Shin",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "Chun-Ka I",
                        "middle": [],
                        "last": "Ku N G",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of Machine Translation Summit XI",
                "volume": "",
                "issue": "",
                "pages": "10--14",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chang, Jing-Shin and Chun-Ka i Ku n g , \" A Chinese-to-Chinese Statistical Machine Translation Model for Mining Synonymous Simplified-T r a d i t i o n a l C h i n e s e T e r ms , \" Proceedings of Machine Translation Summit XI, pages 81-88, Copenhagen, Denmark, 10-14, September, 2007.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Statistical Models for Word Segmentation and Unknown Word Resolution",
                "authors": [
                    {
                        "first": "Tung-Hui",
                        "middle": [],
                        "last": "Chiang",
                        "suffix": ""
                    },
                    {
                        "first": "Jing-Shin",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "Ming-Yu",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Keh-Yih",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proceedings of ROCLING-V",
                "volume": "",
                "issue": "",
                "pages": "123--146",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chiang, Tung-Hui, Jing-Shin Chang, Ming-Yu Lin and Keh-Yih Su, \" Statistical Models for Word Segmentation and Unknown Word Resolution,\"Proceedings of ROCLING-V, pp. 123-146, Taipei, Taiwan, R.O.C., 1992.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A Hi e r a r c h i c a l P h r a s e -Based Model for Statistical Machine Translation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Chiang",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. ACL-2005",
                "volume": "",
                "issue": "",
                "pages": "263--270",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chiang, David, \" A Hi e r a r c h i c a l P h r a s e -Based Model for Statistical Machine Translation,\" Proc. ACL-2005, pages 263-270, 2005.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Academia Sinica Word Segmentation Corpus, ASWSC-2001, (\u4e2d\u7814\u9662\u4e2d\u6587 \u5206 \u8a5e \u8a9e\u6599\u5eab )",
                "authors": [],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "CKIP 2001, Academia Sinica Word Segmentation Corpus, ASWSC-2001, (\u4e2d\u7814\u9662\u4e2d\u6587 \u5206 \u8a5e \u8a9e\u6599\u5eab ), Chinese Knowledge Information Processing Group, Acdemia Sinica, Tiapei, Taiwan, ROC, 2001.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "S t a t i s t i c a l P o s t -Editing on SYSTRANS's Rule-Based Translation System",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Dugast",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Senellart",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Second Workshop on Statistical Machine Translation, 2nd WSMT",
                "volume": "",
                "issue": "",
                "pages": "220--223",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dugast, L., J. Senellart, P. Koehn, \" S t a t i s t i c a l P o s t -Editing on SYSTRANS's Rule-Based Translation System,\" Proceedings of the Second Workshop on Statistical Machine Translation, 2nd WSMT, pp. 220-223, Prague, Czech Republic, June 2007.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Do ma h Automatic Post-Editing",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Isabelle",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Goutter",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Simard",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of MT Summit XI",
                "volume": "",
                "issue": "",
                "pages": "10--14",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Isabelle, P., G. Goutter, M. Simard, \" Do ma h Automatic Post-Editing,\"Proceedings of MT Summit XI, pp. 255-261, Copenhagen, Denmark, 10-14 Sept. 2007.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Au t o ma t e d Post-Editing of Documents",
                "authors": [
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "Ishwar",
                        "middle": [],
                        "last": "Chander",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the Twelfth National Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "779--784",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Knight, Kevin, and Ishwar Chander, \" Au t o ma t e d Post-Editing of Documents,\"in Proceedings of the Twelfth National Conference on Artificial Intelligence, pp. 779-784, CA, USA, 1994.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Automatic Article Restoratio n",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. HLT-NAACL 2004 Student Research Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lee, J., \" Automatic Article Restoratio n , \"in Proc. HLT-NAACL 2004 Student Research Workshop, Boston, MA, 195-200, May, 2004.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Au t o ma t i n g P o s t -Editing to Improve MT S y s t e ms",
                "authors": [
                    {
                        "first": "Ariadna",
                        "middle": [
                            "Font"
                        ],
                        "last": "Llitj\u00f3s",
                        "suffix": ""
                    },
                    {
                        "first": "Jaime",
                        "middle": [],
                        "last": "Carbonell",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "AMTA",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Llitj\u00f3s, Ariadna Font\u00f3s, a n d J a i me C a r b o n e l l , \" Au t o ma t i n g P o s t -Editing to Improve MT S y s t e ms , \" i n Automated Post-Editing Workshop, AMTA, Boston, USA, August 12, 2006.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Improved Alignment Models for Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Christoph",
                        "middle": [],
                        "last": "Tillmann",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proc. EMNLP/WVLC",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Och, Franz Josef, Christoph Tillmann, and Hermann Ney, \" Improved Alignment Models for Statistical Machine Translation,\" in Proc. EMNLP/WVLC, 1999.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "I n Proc. COLING ' 0 0 : T h e 1 8 t h I n t",
                "authors": [
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "1086--1090",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Och, Franz Josef and Hermann Ney, \" Acomparison of alignment models for statistical ma c h i n e t r a n s l a t i o n . \" I n Proc. COLING ' 0 0 : T h e 1 8 t h I n t e r n a t i o n a l C o n f e r e n c eon Computational Linguistics, pages 1086-1090, Saarbr\u00fccken, Germany, August, 2000.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "I mp s",
                "authors": [
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "I n Proceedings of the 38th Annual Meeting of the ACL",
                "volume": "",
                "issue": "",
                "pages": "440--447",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Och, Franz Josef and Hermann Ney, \" I mp s . \" I n Proceedings of the 38th Annual Meeting of the ACL, pages 440-447, 2000.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "BLEU: a method for automatic evaluation of machine translation",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Roukos",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [
                            "J"
                        ],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of ACL-2002, 40th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Papineni, K., S. Roukos, T. Ward, and W. J. Zhu, \" BLEU: a method for automatic evaluation of machine translation,\"In Proceedings of ACL-2002, 40th Annual Meeting of the Association for Computational Linguistics pp. 311-318, 2002.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "T h e MI T -LL/AFRL IWSLT-2006 MT System",
                "authors": [
                    {
                        "first": "Wade",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proc. of the International Workshop on Spoken Language Translation (IWSLT)",
                "volume": "",
                "issue": "",
                "pages": "71--76",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shen, Wa d e , B n , \" T h e MI T -LL/AFRL IWSLT-2006 MT System,\" Proc. of the International Workshop on Spoken Language Translation (IWSLT) 2006, pp. 71-76, Kyoto, Japan, 27 November 2006.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Using Phrase Structure and Fluency to Improve Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Min-Shiang",
                        "middle": [],
                        "last": "Shia",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shia, Min-Shiang, Using Phrase Structure and Fluency to Improve Statistical Machine Translation, Master Thesis, Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan, ROC, June, 2006.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Augmentation Model for Answering Well-Defined Questions",
                "authors": [
                    {
                        "first": "Shu-Fan",
                        "middle": [],
                        "last": "Shih",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shih, Shu-Fan, A Query Augmentation Model for Answering Well-Defined Questions, Master Thesis, Department of Computer Science and Information Engineering, National Chi Nan University, Taiwan, ROC, July, 2007.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Proceedings of NAACL-HLT 2007",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Simard",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Goutter",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Isabelle",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "508--515",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Simard, M., G. Goutter, P. Isabelle, \" g \" . Proceedings of NAACL-HLT 2007, pp. 508-515, Rochester, NY, April 2007.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "R u l e -Based Translation with Statistical Phrase-Based Post-E d i t i n g",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Simard",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Ueffing",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Isabelle",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Kuhn",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Second Workshop on Statistical Machine Translation, 2nd WSMT",
                "volume": "",
                "issue": "",
                "pages": "203--206",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Simard, M., N. Ueffing, P. Isabelle, R. Kuhn, \" R u l e -Based Translation with Statistical Phrase-Based Post-E d i t i n g \" . Proceedings of the Second Workshop on Statistical Machine Translation, 2nd WSMT, pp. 203-206, Prague, Czech Republic, June 2007.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Proceedings of IEEE International Conference on Systems, Man & Cybernetics (SMCC2004)",
                "authors": [
                    {
                        "first": "Yu",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Chengqing",
                        "middle": [],
                        "last": "Zong",
                        "suffix": ""
                    },
                    {
                        "first": "Bo",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhou, Yu, Chengqing Zong, and Bo Xu, \" B i l i n g u a l C h u n k Al i g n me n t i n S t a t i s t i c a l Ma c h i n e T r a n s l a t i o n , \"In Proceedings of IEEE International Conference on Systems, Man & Cybernetics (SMCC2004), Hague, Netherlands, 2004.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF2": {
                "text": "Input\uff1a E\uf0a2 and eb E Find the weakest alignment entry in E\uf0a2 from the < E\uf0a2 , eb E * > alignment.",
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td/><td/><td>Ep</td><td>'</td><td colspan=\"3\">' ' arg min Pr Ep E \uf0ce \uf02d\uf03d</td><td>\uf028 Ep Ep ' |</td><td>\uf029 Pr( ) Ep</td></tr><tr><td colspan=\"2\">Step 2\uff1a Identify</td><td colspan=\"5\">\uf02d Ep that is the phrase in eb E * aligned with \uf028 \uf029 \uf02d \uf03d \uf02d ' align Ep Ep</td><td>\uf02d Ep . '</td></tr><tr><td colspan=\"7\">Step 3\uff1aFind the fluent phrase Ep \uf02bof \uf03d \uf02b PEB Ep ' Ep \uf02dfrom PEB . \uf028 \uf029 \uf02d ' Ep</td></tr><tr><td colspan=\"7\">Step 4\uff1aSelect the best substitution among Ep' -, Ep+ and Ep-which maximize the</td></tr><tr><td colspan=\"5\">translation score: * ps E E \uf03d</td><td>ps</td><td>\uf07b arg max Pr ' | Pr( ) \uf07d \uf028 \uf029 ' , , Ep Ep Ep E E E \uf0ce \uf02d \uf02b \uf02d</td></tr><tr><td>Step 5\uff1aCut</td><td colspan=\"6\">\uf02d Ep from eb E * eb E E \uf03d</td><td>eb</td><td>( \uf02d \uf02d\uf02b ) ( Ep</td><td>Eps</td><td>)</td></tr><tr><td colspan=\"7\">(Repeat until the translation score Pr(E' | E ) xPr(E) reaches some threshold.)</td></tr></table>",
                "html": null
            },
            "TABREF3": {
                "text": "indicates the performance in terms of the error correction capability.",
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>Error types</td><td>C</td><td colspan=\"3\">Searching Models C+W C+W+P C+W+Q</td></tr><tr><td>Substitution</td><td>21</td><td>23</td><td>32</td><td>34</td></tr><tr><td>Deletion</td><td>28</td><td>39</td><td>46</td><td>62</td></tr><tr><td>Insertion</td><td>40</td><td>43</td><td>47</td><td>47</td></tr><tr><td>Average</td><td>30</td><td>35</td><td>42</td><td>48</td></tr><tr><td colspan=\"4\">Table 1. Number of fully corrected sentences with</td><td/></tr><tr><td colspan=\"4\">different searching models (N=100)</td><td/></tr></table>",
                "html": null
            }
        }
    }
}