{
    "paper_id": "O04-1005",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:00:20.637034Z"
    },
    "title": "A Three-Phase System for Chinese Named Entity Recognition",
    "authors": [
        {
            "first": "Conrad",
            "middle": [],
            "last": "Chen",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Chiao Tung University",
                "location": {
                    "settlement": "Hsinchu"
                }
            },
            "email": "drchen@csie.nctu.edu.tw"
        },
        {
            "first": "Hsi-Jian",
            "middle": [],
            "last": "Lee",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Tzu Chi University",
                "location": {
                    "settlement": "Hualien"
                }
            },
            "email": "hjlee@mail.tcu.edu.tw"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The handling of out-of-vocabulary (OOV) words is one of the key points to a high performance lexical analysis in natural language processing. Among all OOV words, named entities (NE) are the most productive ones. They generally constitute the most meaningful parts of sentences (persons, affairs, time, places, and objects). In this paper, we propose a three-phase \"generation, filtering, and recovery\" system to address the NER problem. A set of stochastic models is first used to generate all possible NE candidates. Then we treat candidate filtering as an ambiguity resolution problem. To resolve ambiguities, we adopt a maximal-matching-rule-driven lexical analyzer. Last, a pattern matching method is applied to detect and recover abnormalities in the results of the previous two phases. Pure lexical information is exploited in our system. We get a high recall of 96% with personal names (PER), satisfiable recall of 88%, 89%, and 80% with transliteration names (TRA), location names (LOC), and organization names (ORG), respectively. The overall precision and excluding rate is over 90% and 99%.",
    "pdf_parse": {
        "paper_id": "O04-1005",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The handling of out-of-vocabulary (OOV) words is one of the key points to a high performance lexical analysis in natural language processing. Among all OOV words, named entities (NE) are the most productive ones. They generally constitute the most meaningful parts of sentences (persons, affairs, time, places, and objects). In this paper, we propose a three-phase \"generation, filtering, and recovery\" system to address the NER problem. A set of stochastic models is first used to generate all possible NE candidates. Then we treat candidate filtering as an ambiguity resolution problem. To resolve ambiguities, we adopt a maximal-matching-rule-driven lexical analyzer. Last, a pattern matching method is applied to detect and recover abnormalities in the results of the previous two phases. Pure lexical information is exploited in our system. We get a high recall of 96% with personal names (PER), satisfiable recall of 88%, 89%, and 80% with transliteration names (TRA), location names (LOC), and organization names (ORG), respectively. The overall precision and excluding rate is over 90% and 99%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Words are generally the basic unit to process natural languages. However, in Chinese, sentences are composed of string of characters without any delimiters to mark word boundaries. To process Chinese, sentences must be segmented into word sequences first. Most Chinese language processing systems rely on lexicons to recognize words in sentences. Because the number of Chinese words is tremendous, it is impossible to compile all words in a lexicon. Therefore, word segmentation processes often encounters the problem of out-of-vocabulary (OOV) words. Among all OOV words, named entities are one of the most important sorts. It is impossible to list them exhaustively in a lexicon. They are the most productive type of words. Nearly no simple or unified generation rules for them exist. Besides, they are usually keywords in documents. Named entity recognition (NER) thus becomes a major task to many natural language applications, such as natural language understanding, question answering, and information retrieval. Many researches have addressed the NE recognition problem in Chinese since 1990. Most of them focused on some specific types as personal names [5] [13] , location names [9] , organization names [10] , and transliteration names [11] . There are also type-independent approaches of NER. However, most of these approaches need type-dependent data such as role tags. Type-independent approaches can be roughly divided into two major sorts: over-generating & disambiguating [3] [12] and over-segmenting & generating [4] [8] . Generally speaking, there are two main approaches of the above studies, rule-based models and machine learning methods. Rule-based approaches could effectively exploit human knowledge and can be tuned conveniently. On the other hand, machine learning approaches, such as maximum entropy or support vector machine, is more independent from languages and simple to implement. Rule-based approaches is slightly outperform machine learning ones in MUC-7 tests [2] . In our consideration, rule-based approaches are more reasonable than machine learning ones. Boosting performances of rule-based approaches is easier than improving machine learning abilities. Therefore, rule-based approaches is adopted in this paper, while machine learning methods still could be incorporate in our system under the present framework in future. A three-phase \"generation, filtering, and recovery\" system is proposed to solve NER problem. In the generation phase, stochastic models are responsible for generating all possible candidates of different kinds of named entities in input documents. In the filtering phase, we treat the filtering of false candidates as an ambiguity resolution problem. A maximal-matching-rule-driven lexical analysis is performed to resolve ambiguities caused by false candidates. In the recovery phase, a rule-driven pattern matching method is applied to detect and recover abnormalities in the results of the previous two phases.",
                "cite_spans": [
                    {
                        "start": 1162,
                        "end": 1165,
                        "text": "[5]",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 1166,
                        "end": 1170,
                        "text": "[13]",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 1188,
                        "end": 1191,
                        "text": "[9]",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1213,
                        "end": 1217,
                        "text": "[10]",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1246,
                        "end": 1250,
                        "text": "[11]",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 1488,
                        "end": 1491,
                        "text": "[3]",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 1492,
                        "end": 1496,
                        "text": "[12]",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1530,
                        "end": 1533,
                        "text": "[4]",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1534,
                        "end": 1537,
                        "text": "[8]",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1996,
                        "end": 1999,
                        "text": "[2]",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "In our system, we try to make use of both the tunability of stochastic models in candidate extraction and the power of lexical analyzers in disambiguation. To implement this idea, we propose a three-phase framework: candidate generation, filtering, and recovery, as shown in Figure 2 .1: In the first phase, all possible candidates of various kinds of named entities in the input document are extracted. Notice that this process is inevitably both over-generating and under-generating. Because of the filtering process, the candidate extracting can be tuned to have a higher recall and to sacrifice precision a little for a moment. Statistical approaches are adopted in the candidate generation phase. The reason is that names are given by people. Therefore, there is no exact answer if a string is a name or not. The only thing can be judged is how likely the string is to be a name. As for computers, to estimate the likelihood of names is basically a fuzzy problem. If a character is more likely to appear in a name, it has a better fuzzy value. The detail of how fuzzy logic and statistic estimation are applied will be discussed later.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 275,
                        "end": 283,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "System Overview",
                "sec_num": "2."
            },
            {
                "text": "The second phase of the system is false candidate filtering. How do we verify which candidates are true named entities and which ones are false? False candidates are either a common word or composed of fragments of common words and named entities. The first case has less impact on subsequent applications. The second case usually results ambiguous segmentations. Verification of these candidates could be viewed as an ambiguity resolution problem. If we can judge which segmentation is correct or more proper, we could also verify which candidates are true named entities. Because of the regularity of lexical choices in modern Chinese, many simple approaches of segmentation ambiguity resolution have good performances. No matter what simple methods it takes, heuristic rules or stochastic estimations, if there are no OOV words, most lexical analysis methods show great precision in ambiguity resolution. That is to say, if we got a high recall in the extraction of NE candidates, most of the segmentation ambiguities caused by false candidates are supposed to be resolved by conventional word segmentation methods. We choose a heuristic approach, which is mainly driven by maximal matching rules, to resolve segmentation ambiguities. The third phase of the system is recovery. The recovery mechanism is used to revive some obviously incorrect results of the first two phases. There are two major target types to be recovered: over-segmentations caused by under-generation and under-segmentations caused by over-generation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Overview",
                "sec_num": "2."
            },
            {
                "text": "Through the detection of these anomalies, e.g. a succession of single-character words indicating over-segmentations, part of un-extracted named entities could be revived.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Overview",
                "sec_num": "2."
            },
            {
                "text": "The candidate generator is used to extract all possible named entity candidates in input documents. There are four layers in the candidate generator to handle four sorts of NEs: close-ended NEs, genuine names, whole named entities, and abbreviations. Close-ended named entities comprise time and quantity expressions. Since the extraction of close-ended NEs is not the focus of this paper, and previous researches [6] have solved this problem well, a single simplified rule is applied to recognize most of them in our system. The rule is as follows:",
                "cite_spans": [
                    {
                        "start": 414,
                        "end": 417,
                        "text": "[6]",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "[ For example, \" \" is a genuine name and \" \" is a whole named entity with suffix \" \" indicating that \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "\" is a city. The handling of prefixes is much similar to that of suffixes, and on the other hand prefixes are much more rarely seen than suffixes. Therefore, for simple implementation, whole NEs with prefixes would not be recognized in our system. Suffixes generally indicate the type of named entities. There are many types of named entities with different suffixes. Many sorts of them rarely appear in the document. It is not worth to build models for each type of these names. However, suffixes are strong features. It is easier to recognize them, and chances of error recognition are comparatively low. Therefore, a compromised method is adopted that only models for four kinds of genuine names are implemented at present in our system. They are personal names, transliteration names, location names, and organization names. These four kinds of genuine name candidates would be used to form various types of NEs with corresponding suffixes. For instance, if a personal name candidate is followed by a publication suffix, they will be recognized as a whole publication name, like:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "\" \"(personal name) + \" \"(publication suffix) \" \"(publication name)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "For the same reason above, all NE suffixes are roughly classified into three categories: ones with similar corresponding genuine name types to location suffixes, ones with similar corresponding genuine name types to organization suffixes, and others. The first category covers all location names, racial names, etc. The second one comprises all organization names except for racial names, facility names, publication names, etc. The third one includes feat names, culture names, and so on. Among these three categories, only the first two are addressed by our system. These two categories are called \"location-like NE\" and \"organization-like NE\". Names belonging to the same category will be addressed by the same corresponding model. There are two main advantages following this way. First, times spent on designing models and collecting data are saved. Second, confidences brought by suffixes could alleviate the deviation on statistics brought by a compromised approach. The extraction of genuine names and whole named entities will be detailed later. Open-ended named entities extracted above are used to find possible abbreviations and some rule-recognizable aliases in the abbreviation generation model. Four simple rules are adopted to complete this job: Rule 1: Take the first characters of genuine name and all suffixes other than typing suffix, and the last character of typing suffix from NE candidates (e.g. \" \" \" \") Rule 2: Surnames of personal name candidates (e.g. \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "\" \" \") Rule 3: Given names of personal names (e.g. \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "\" \" \") Rule 4: Modifier + Surname or any character of Given names (e.g. \" \" \" \", \" \", \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
            {
                "text": "\", etc.) Notice that only abbreviations and aliases with original names appearing in the document could be addressed by our system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Candidate Generation",
                "sec_num": "3."
            },
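Read as code, the four rules above are simple string operations. The following is a minimal sketch under the assumption that a whole-NE candidate has already been decomposed into its genuine name, its ordinary suffixes, and its typing suffix; the concrete Chinese examples in the paper do not survive in this copy, so the inputs here are schematic and all function names are ours, not the authors'.

```python
def rule1_abbreviation(genuine, suffixes, typing_suffix):
    # Rule 1: first character of the genuine name and of every suffix
    # other than the typing suffix, plus the typing suffix's last char.
    return genuine[0] + "".join(s[0] for s in suffixes) + typing_suffix[-1]

def personal_aliases(surname, given_name, modifiers=()):
    aliases = {surname, given_name}   # Rules 2 and 3
    # Rule 4: modifier + surname, or modifier + any given-name character.
    for mod in modifiers:
        aliases.add(mod + surname)
        aliases.update(mod + ch for ch in given_name)
    return aliases
```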
            {
                "text": "The recognition of genuine names is basically a fuzzy decision problem to computers. There is no exact right or wrong answer for a string to be a name. The only problem is how likely it is. Fuzzy values represent strings' likelihood or properness to be a name. Since Chinese is a character-based language, methods of estimating fuzzy values are generally also character-based. Names are composed of several characters. There are several ways to transform the member characters' fuzzy value to the string's fuzzy value. Stochastic language models are usually adopted to estimate the likelihood of a candidate to be a named entity. The fundamental principle is that the string with a higher probability or frequency to be a name has a higher fuzzy value or likelihood. There are several ways to estimate the fuzzy value of a string from the statistic data based on characters. These models include Markov models, bi-gram models, unigram models, etc. Each model has its advantages and disadvantages. Generally speaking, more complex the model is, more precisely it estimate, and more training data it needs. Besides that, the data-sparseness problem is more likely to happen. Since the amounts of features of different types of named entities are varied, each type has its own best-fit model. In this paper, to simplify data collecting and training, unigram models are adopted. Additionally, some supplementary information such as positional feature is exploited to support statistical models. Generally there are two major ways to estimate fuzzy values of a single character:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "Frequency: freq(typ|c)=counts(typ, c) Probability: prob(typ|c)=counts(typ, c)/counts(c)=freq(typ|c)/counts(c) Frequencies stand for differences among naming-characters. They represent popularities of characters to be used in names of some type. If some character is used in more names, it has a higher frequency. If frequencies are used as fuzzy values, a higher recall will be obtained with common names. Probabilities stand for differences among all characters. They represent possibilities of characters to be used in a name of some type. If some character appears more frequently in names than in common words, it has a higher probability. If probabilities are used as fuzzy values, a higher precision and a higher recall will be obtained with rare names. However, it has a lower recall with common names comparing with using frequencies. A hybrid statistics is adopted in our system to take advantages of both frequencies and probabilities. With common naming-characters, frequencies are adopted to get a higher recall with common names. With rare naming-characters, probabilities are adopted to complement frequencies' insufficiency with rare names. The resulting model looks like:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "(typ|c) = Max{ freq(typ|c), prob(typ|c) }",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
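As a concrete illustration of the hybrid statistic, the sketch below computes freq, prob, and their per-character maximum from raw counts. It is a minimal reading of the formulas above, not the authors' code; note that the two terms only become directly comparable after the normalization of freq to [0.1, 1] (freq*) described later in this section.

```python
from collections import Counter

def character_statistics(name_chars, all_chars):
    """Per-character freq/prob/hybrid values for one NE type.

    name_chars: characters observed in names of the given type;
    all_chars:  characters observed anywhere in the training corpus.
    A minimal sketch of Section 3.1; all names here are illustrative.
    """
    counts_typ = Counter(name_chars)   # counts(typ, c)
    counts_all = Counter(all_chars)    # counts(c)

    freq = dict(counts_typ)            # freq(typ|c) = counts(typ, c)
    # Guard the denominator in case a naming character was not seen
    # elsewhere in the corpus (prob is then capped at 1).
    prob = {c: counts_typ[c] / max(counts_all[c], counts_typ[c])
            for c in counts_typ}

    # Hybrid model: FV(typ|c) = max(freq(typ|c), prob(typ|c)).
    # Raw freq must be rescaled to [0.1, 1] (freq*) before this max
    # is meaningful; the raw values are shown here for clarity only.
    hybrid = {c: max(freq[c], prob[c]) for c in counts_typ}
    return freq, prob, hybrid
```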
            {
                "text": "Data sparseness and reappearances of names make it hard to estimate probabilities. To overcome these difficulties, we propose to use inverse common frequencies to approximate probabilities:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "icf(c)=1/(freq(common word|c)+1)=1/(counts(common word, c)+1)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "Since probabilities are mainly used to estimate the likelihood of rarely seen events, counts(c) is usually dominated by counts(common word, c), and counts(common word, c) is in direct proportion to the number of lexicon entries in which the character c appears. Under these assumptions, we use inverse lexicon counts to approximate probabilities: ilc(c) = 1/(Num_of_Lex_Entries(c)+1) ≈ icf(c) ≈ prob(typ|c)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
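The two inverse-count quantities compute directly; the sketch below transcribes the formulas literally (only the helper names are ours).

```python
def icf(common_word_count):
    # Inverse common frequency: icf(c) = 1 / (counts(common word, c) + 1)
    return 1.0 / (common_word_count + 1)

def ilc(num_lexicon_entries):
    # Inverse lexicon count: ilc(c) = 1 / (Num_of_Lex_Entries(c) + 1),
    # a stand-in for icf(c), and hence for prob(typ|c), on rare events.
    return 1.0 / (num_lexicon_entries + 1)
```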
            {
                "text": "Because ilc(c) is ranged from 0 to 1, freq(typ|c) also needs to be normalized to 0 to 1. The distribution of raw data of freq(typ|c) is conformed to Zipf's Law, that: P n 1/n a , where P n is the frequency of occurrence of the n th ranked item and a is close to 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "Values with often seen characters are too high and the distinctions among low frequency characters are not wide enough. Therefore, a logarithm function is taken on the raw data to smooth the distribution curve, and then the result is normalized to 0.1 to 1. Notice that the lower bound of freq*(typ|c) is set to 0.1, not 0. This is because the meaning of events that appear once is greatly different from the meaning of unseen events.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "The final character likelihood model looks like: (typ|c) = Max{ freq*(typ|c), ilc(c) }",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
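Below is a sketch of the smoothing and the final per-character model. The paper states only that a logarithm is taken and the result rescaled to [0.1, 1], so the natural logarithm and the linear rescaling used here are assumptions, and the function names are ours.

```python
import math

def freq_star(freq):
    """Log-smooth Zipf-distributed raw frequencies, rescale to (0.1, 1]."""
    logs = {c: math.log(f) + 1.0 for c, f in freq.items()}  # f >= 1
    top = max(logs.values())
    return {c: 0.1 + 0.9 * v / top for c, v in logs.items()}

def fv(c, freq_star_map, ilc_map):
    # Final model: FV(typ|c) = max(freq*(typ|c), ilc(c)); a character
    # unseen in names of this type falls back to its ilc value alone.
    return max(freq_star_map.get(c, 0.0), ilc_map.get(c, 0.0))
```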
            {
                "text": "Notice that there are two exceptions to this model. With surnames and transliterating characters, likelihoods of unseen events in training data are assigned to zero. This is because generally surnames and transliterating characters are not arbitrarily given. Probabilities of most characters to be surnames or transliterating characters are actually zero. The original model might cause unnecessary over-generation. To prevent this problem, only surnames and transliterating characters appearing in our training data are adopted as possible ones.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistic Estimation",
                "sec_num": "3.1."
            },
            {
                "text": "Open-ended named entity extraction models would estimate likelihoods of strings to be some type of named entity from character likelihoods. Unigram models are adopted as the basis of our models. They could be represented as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Open-ended Named Entity Extraction",
                "sec_num": "3.2."
            },
            {
                "text": "*(typ|g s) = (typ|g) ConRe(g) ConSuf(typ, s) where g denotes the genuine name and s denotes the suffix part If *(typ|g s) is over some pre-defined threshold, which is decided by maximizing the f-measure of the recall of training data and the excluding rate of lexicon entries, g s would be recognized as a possible candidate and added into the candidate pool. Each member of the formula is detailed below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Open-ended Named Entity Extraction",
                "sec_num": "3.2."
            },
            {
                "text": "ConRe(g) estimates the confidence could be brought by reoccurrences of the genuine name, which is defined as: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Open-ended Named Entity Extraction",
                "sec_num": "3.2."
            },
            {
                "text": "ConRe(g)= k",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Open-ended Named Entity Extraction",
                "sec_num": "3.2."
            },
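Operationally, the candidate test is a thresholded product of three scores. The sketch below treats ConRe and ConSuf as opaque inputs, since their full definitions do not survive in this copy of the paper; all names are ours.

```python
def whole_ne_fv(fv_genuine, con_re, con_suf):
    # FV*(typ|g s) = FV(typ|g) * ConRe(g) * ConSuf(typ, s)
    return fv_genuine * con_re * con_suf

def accept_candidate(fv_genuine, con_re, con_suf, threshold):
    # The threshold is tuned per type by maximizing the f-measure of
    # training-data recall against the lexicon excluding rate.
    return whole_ne_fv(fv_genuine, con_re, con_suf) >= threshold
```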
            {
                "text": "Besides the above models, there are three supplementary mechanisms designed to relieve over-generation problems of stochastic models: 1. If some candidate is constituted of two multisyllabic words or one multisyllabic word and one often seen monosyllabic word, this candidate would be removed from the candidate pool. 2. If the first or the last character of some three-character-long organization name candidate is a monosyllabic word that often appears adjacent to a name, as \" \" and \" \", this candidate will be removed from the candidate pool. 3. With transliteration names, sometimes a common word might be wrongly attached by a transliteration candidate. In this situation, maximal-matching-rule-driven lexical analyzer cannot filter it out properly. A concept called \"team\" based on reoccurrences is introduced to solve the attaching problem. Basically, all substrings of possible transliteration name candidates are also possible candidates. Hence all transliteration name candidates can be grouped into teams according to their longest common superstring candidate. For example, a team can be represented as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Supplementary Mechanism",
                "sec_num": "3.3."
            },
            {
                "text": "T(leader = \" \") = { \" \"(5), \" \"(6), \" \"(4), \" \"(5) }",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Supplementary Mechanism",
                "sec_num": "3.3."
            },
            {
                "text": "} Where all appearance times of candidates are marked up, and superstring \" \" is called the \"leader\" of the team. The following algorithm is then applied: I. Subtract leader's appearance times from each team member II. If the leader could be split into candidates with non-zero appearance times after subtraction and multisyllabic common words or frequently used monosyllabic words, discard the leader and members whose appearance times being subtracted to zero III. Form new teams comprised of remaining candidates with new leaders IV. Repeat step I~III, until no candidates could be discarded",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Supplementary Mechanism",
                "sec_num": "3.3."
            },
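A minimal sketch of this pruning loop follows. The greedy splitting check is an assumption on our part (the paper does not fix an exact splitting method), and persisting the subtraction only when the leader is discarded is likewise a simplification.

```python
def prune_transliteration_teams(counts, is_common_word):
    """Sketch of the 'team' pruning loop (steps I-IV above).

    counts: dict mapping candidate string -> appearance count.
    is_common_word(s): True for multisyllabic common words and for
    frequently used monosyllabic words.
    """
    changed = True
    while changed:
        changed = False
        # A team leader is a candidate not contained in any longer one
        # (the longest common superstring of its team).
        leaders = [c for c in counts
                   if not any(c != d and c in d for d in counts)]
        for leader in leaders:
            members = [c for c in counts if c != leader and c in leader]
            # Step I: subtract the leader's appearances from each member.
            residual = {c: counts[c] - counts[leader] for c in members}
            # Step II: drop the leader (and zeroed members) if it can be
            # rebuilt from surviving candidates and common words.
            if _splits(leader, residual, is_common_word):
                del counts[leader]
                for c, n in residual.items():
                    if n <= 0:
                        counts.pop(c, None)   # subtracted to zero: discard
                    else:
                        counts[c] = n
                changed = True                # steps III-IV: re-team, repeat
                break
    return counts

def _splits(s, residual, is_common_word):
    # Greedy left-to-right cover of s by surviving candidates or
    # common words, trying longer pieces first.
    i = 0
    while i < len(s):
        for j in range(len(s), i, -1):
            piece = s[i:j]
            if residual.get(piece, 0) > 0 or is_common_word(piece):
                i = j
                break
        else:
            return False
    return True
```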
            {
                "text": "The lexical analyzer is responsible for verifying candidates generated by the candidate generator. Heuristic rules are adopted to filter out false named entity candidates and resolve ambiguities caused by false candidates. There are six heuristic rules applied in order precedence: Rule 1: Tri-word maximal matching, which is proposed by Chen & Liu (1992) [1] . The rule follows below three steps: 1. From the segmenting point, look forward for all possible tri-word combinations. 2. Take the first word of the longest sequence of all, segment this word.",
                "cite_spans": [
                    {
                        "start": 338,
                        "end": 355,
                        "text": "Chen & Liu (1992)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 356,
                        "end": 359,
                        "text": "[1]",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": "4."
            },
            {
                "text": "3. Move to the next segmenting point. For example, with the sentence \" \", \" \" would be picked instead of \" \" because \" \" is longer than \" \". Rule 2: Least number of NEs first, which would pick the tri-word sequence with the least number of named entities among all sequences of the same length. Rule 3: Most frequently appearing NEs first, which would pick the tri-word sequence with the most appearing times of component NEs in the input document. Rule 4: Words of even lengths first, which would choose the sequence with most words of even lengths. There are several exceptions to this rule. First, personal names, transliteration names, and numerical expressions are not concerned in this rule. Second, the often seen monosyllabic words, like \" \", \" \", \" \", etc., are viewed as words of even lengths instead. For example, \" \" is regarded as totally having two words of even lengths, one is \" \" and another one is \" \", not \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": "4."
            },
            {
                "text": "\". Third, the suffix part of a whole named entity is not considered into the length of it. For example, \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": "4."
            },
            {
                "text": "\" is viewed as a word of even lengths, not of odd ones. Rule 5: Often seen monosyllabic words first, which is also proposed by [1] , would pick the sequence with the most often seen monosyllabic words. Rule 6: Forward precedence, which would choose the tri-word sequence with longer forward words.",
                "cite_spans": [
                    {
                        "start": 127,
                        "end": 130,
                        "text": "[1]",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": "4."
            },
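Below is a sketch of Rule 1, tri-word maximal matching, under the assumption of a simple dictionary-set lookup with a bounded word length. Ties among equally long combinations would fall through to Rules 2-6, which are omitted here; the function and parameter names are ours.

```python
def tri_word_match(sentence, start, dictionary, max_word_len=8):
    """Return the word to segment off at `start` (Rule 1 sketch).

    dictionary: set of lexicon words plus accepted NE candidates.
    """
    def words_at(pos):
        if pos >= len(sentence):
            return [""]                        # past the end: empty word
        hits = [sentence[pos:pos + n] for n in range(max_word_len, 0, -1)
                if sentence[pos:pos + n] in dictionary]
        return hits or [sentence[pos]]         # fall back to one character

    best, best_len = None, -1
    for w1 in words_at(start):                 # enumerate tri-word combos
        for w2 in words_at(start + len(w1)):
            for w3 in words_at(start + len(w1) + len(w2)):
                total = len(w1 + w2 + w3)
                if total > best_len:           # keep the longest combo
                    best, best_len = w1, total
    return best                                # first word of that combo
```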
            {
                "text": "For example, with two ambiguous tri-word sequence \" \" and \" \", the former would be picked since \" \" is longer than \" \". In order to measure the performance of our lexical analyzer on ambiguity resolution, the test samples of our system (61 news articles from United Daily News and Central News Agency, which will be further discussed later) are examined. The following measurements are adopted:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": "4."
            },
            {
                "text": "Ambiguous Tri-Word Sequences: # of all possible tri-word sequences which could not be discriminated by the prior rules Resolved: # of tri-word sequences which could be filtered by the corresponding rule Errors: # of correct words which are wrongly filtered Applying Rate: Resolved / Ambiguous Tri-Word Sequences Accuracy: 1 -Errors / Resolved The experimental results are listed in Table 4 .1: Table 4 .1. The performance of heuristic rules in ambiguity resolution",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 382,
                        "end": 389,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 394,
                        "end": 401,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": "4."
            },
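The two evaluation ratios defined above compute directly; a trivial transcription (with our naming):

```python
def rule_metrics(ambiguous, resolved, errors):
    # Applying Rate = Resolved / Ambiguous Tri-Word Sequences
    # Accuracy     = 1 - Errors / Resolved
    applying_rate = resolved / ambiguous if ambiguous else 0.0
    accuracy = 1.0 - errors / resolved if resolved else 1.0
    return applying_rate, accuracy
```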
            {
                "text": "The recovery mechanism would revive obvious incorrect results of segmentations which are not suitable to be solved by priority-style rules. These anomalies mainly comprise two situations: oversegmentations caused by under-generation, and under-segmentations caused by over-generation. The segmentation checker would find suspect segmentation sequences and try to recover them. To deal with over-segmentations, sequences of three or more seldom used monosyllabic words in a row are suspected. These suspects are checked to see if any fragments of them could constitute NE candidates with (TYP|s) over a predefined suspect threshold of the corresponding type. For example, since (TRA|\" \") = 0.43 < 0.51, the candidate threshold of (TRA|s), the string is usually segmented to \" \" in the first two phases. This suspect sequence will be detected by the segmentation checker. Because (TRA|\" \") is larger than the suspect threshold of (TRA|s), which is set to 0.2 in our system, \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recovery",
                "sec_num": "5."
            },
            {
                "text": "\" is added into the candidate list of transliteration names. With personal names, there is another special case. Let us consider the personal name \" \". (PER|\" \") = 0.23 < 0.26, the candidate threshold of (PER|s). However, (PER|\" \"), which equals 0.54, is larger than the candidate threshold. When this situation happens, the personal name is usually incorrectly segmented into a personal name of two characters and a monosyllabic word, such as \" \" in this case. To cope with this situation, the following sequence is also viewed as suspects of over-segmentations:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recovery",
                "sec_num": "5."
            },
            {
                "text": "On the other hand, to deal with under-segmentations, segmentation sequences constituted of interlaced appearances of transliteration, location, organization names, and seldom used monosyllabic words, are suspected. These sequences are attempted to be re-segmented into a new sequence containing one more word than the original sequences. For example, if \" \" is incorrectly recognized as a location name, the phrase \" \" would be wrongly segmented into a suspect sequence \" \". This sequence would be detected and re-segmented into the right sequence \"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "two-character-long personal name candidate + seldom used monosyllabic word",
                "sec_num": null
            },
            {
                "text": "\". If the re-segmenting cannot be performed, the original sequence will be kept. The procedure of segmentation checker is as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "two-character-long personal name candidate + seldom used monosyllabic word",
                "sec_num": null
            },
            {
                "text": "1 ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "two-character-long personal name candidate + seldom used monosyllabic word",
                "sec_num": null
            },
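Since the procedure listing itself does not survive in this copy, the sketch below reconstructs only what the prose states: flag runs of three or more seldom-used monosyllabic words (or a two-character personal-name candidate followed by one such word), then re-test flagged fragments against the lower suspect threshold. The structure and all names are our assumptions.

```python
def over_segmentation_suspects(segs, is_rare_mono, is_two_char_per):
    """Yield (start, end) index ranges of suspect over-segmentations."""
    n, i = len(segs), 0
    while i < n:
        # Case 1: three or more seldom-used monosyllabic words in a row.
        j = i
        while j < n and len(segs[j]) == 1 and is_rare_mono(segs[j]):
            j += 1
        if j - i >= 3:
            yield (i, j)
            i = j
            continue
        # Case 2: two-character personal-name candidate + rare monosyllable.
        if (i + 1 < n and is_two_char_per(segs[i])
                and len(segs[i + 1]) == 1 and is_rare_mono(segs[i + 1])):
            yield (i, i + 2)
            i += 2
            continue
        i += 1

def revive(fragment, fv_of, suspect_threshold):
    # Re-admit a flagged fragment as a candidate when FV(TYP|s) clears
    # the suspect threshold (e.g. 0.2 for TRA in the paper).
    return fv_of(fragment) >= suspect_threshold
```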
            {
                "text": "To measure the performance of our system, a corpus which is balanced and well-tagged according to our standard is needed. The most popular standard test corpus, MET-2 data, is biased on some special topics and uses a different tagging standard from ours. Therefore, instead of a standard testing corpus, we obtain 61 articles from United Daily News and Central News Agency as our test bed. These articles are segmented and tagged by our system and corrected manually. These 61 articles are gathered from five different domains. They are politics, society, business, sports, and entertainment. Because the quantity of politics news and society news is more than others, we obtain three different sub-topics (lawsuit, government, and election) from politics news and two (crime and local) from society news. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "6."
            },
            {
                "text": "It stands for the percentage of non-NEs being correctly filtered by our system. Table 6 .2 shows the recall of different types of NE. Because we do not focus on automatic classification, one NE might be recognized by many different models, it's hard to judge the precision of each type and only the recalls are listed here. Notice that the first five columns (PER, TRA, LOC, ORG, ABB) only include the focused types of our system. Column PER comprise only formal Chinese personal names and personal names with appellations. Other personal names, such as Japanese name \" \" and pseudonym \" \", are counted in the column PO instead. Monosyllabic place names without suffixes, like \" \" and \" \", are recognized by lexicon matching and counted in the column LO. Government and team names are also recognized by lexicon. They are viewed as OO. All other location names and organization names are included in the column LOC and ORG respectively. Column ABB contains only abbreviations with original reference in the input document, other abbreviations are considered as AO.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 80,
                        "end": 87,
                        "text": "Table 6",
                        "ref_id": "TABREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Excluding rate = 1 -(# of False)/(# of Words -# of True)",
                "sec_num": null
            },
            {
                "text": "Overall speaking, pure lexical information is employed to recognize named entities in our system. Only statistical features and internal structures of NE are utilized. Our statistical model and heuristic rules are simplified for easy implementation. However, our system gets a satisfied performance, and there are still many rooms for improvement.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Future Works",
                "sec_num": "7."
            },
            {
                "text": "First, statistical models could be refined. More training data could be collected. More elaborate candidate generating models could be adopted, such as bi-gram models. More internal features could be exploited, such as positional information of characters. Contextual information, such as word probability of being adjacent to some type of NEs, could be also added into our model. Second, heuristic rules could be more completed or substituted by other mechanisms. Shortcomings of heuristic rules form an upper-bound barrier of performances. More rules could be introduced to cover the inadequacies of original ones. Other mechanism like statistical approaches could be used to replace rule-driven methods. Third, more candidate generating models could be added. Many types of NEs have not been addressed in our system. We could find that these NEs occupy a great proportion of true negative errors. If these NEs could be recognized, the recall of our system is supposed to be boosted. Fourth, more knowledge could be gathered and utilized. The suffix and appellation information used in our system is handcrafted at present. Bootstrapping or machine learning algorithm might help us automatically retrieve these kinds of information from the Internet or corpus. Part-of-speech tagging, syntactic checking and even semantic analysis might also be added into our future system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Future Works",
                "sec_num": "7."
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Word Identification for Mandarin Chinese Sentences",
                "authors": [
                    {
                        "first": "Keh-Jiann",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "H"
                        ],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proceedings of COLING-92",
                "volume": "1",
                "issue": "",
                "pages": "101--107",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen, Keh-Jiann and S. H. Liu, 1992, \"Word Identification for Mandarin Chinese Sentences,\" Proceedings of COLING-92, Vol. 1, pp. 101-107",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "MUC-7 Test Score Reports for all Participants and all Tasks",
                "authors": [
                    {
                        "first": "Nancy",
                        "middle": [],
                        "last": "Chinchor",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the MUC-7",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chinchor, Nancy, 1998, \"MUC-7 Test Score Reports for all Participants and all Tasks\" in Proceedings of the MUC-7.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Learning Pattern Rules for Chinese Named Entity Extraction",
                "authors": [
                    {
                        "first": "Tat-Seng",
                        "middle": [],
                        "last": "Chua",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of AAAI/IAAI 2002",
                "volume": "",
                "issue": "",
                "pages": "411--418",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chua, Tat-Seng and J. Liu, 2002, \"Learning Pattern Rules for Chinese Named Entity Extraction,\" Proceedings of AAAI/IAAI 2002, pp. 411-418",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Chinese Unknown Word Identification Using Character-based Tagging and Chunking",
                "authors": [
                    {
                        "first": "Chooi Ling",
                        "middle": [],
                        "last": "Goh",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Asahara",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Matsumoto",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "ACL-2003 Interactive Posters/Demo",
                "volume": "",
                "issue": "",
                "pages": "197--200",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Goh, Chooi Ling, M. Asahara, Y. Matsumoto, 2003, \"Chinese Unknown Word Identification Using Character-based Tagging and Chunking,\" ACL-2003 Interactive Posters/Demo, pp. 197-200",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Inverse Name Frequency Model and Rule Based Chinese Name Identification",
                "authors": [
                    {
                        "first": "Heng",
                        "middle": [],
                        "last": "Ji",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [
                            "S"
                        ],
                        "last": "Luo",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Natural Language Understanding and Machine Translation (In Chinese)",
                "volume": "",
                "issue": "",
                "pages": "123--128",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ji, Heng and Z. S. Luo, 2001, \"Inverse Name Frequency Model and Rule Based Chinese Name Identification,\" (In Chinese) Natural Language Understanding and Machine Translation, Tsinghua University Press, pp. 123-128.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Determinative-Measure Compounds in Mandarin Chinese: Formation Rules and Parser Implementation",
                "authors": [
                    {
                        "first": "Ruo Ping",
                        "middle": [],
                        "last": "Mo",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [
                            "J"
                        ],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [
                            "J"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "R"
                        ],
                        "last": "Huang",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Readings in Chinese natural language processing",
                "volume": "9",
                "issue": "",
                "pages": "123--146",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mo, Ruo Ping, Y. J. Yang, K. J. Chen, and C. R. Huang, 1996, \"Determinative-Measure Compounds in Mandarin Chinese: Formation Rules and Parser Implementation,\" In C. R. Huang, K. J. Chen and B. K. Tsou (Eds.), Readings in Chinese natural language processing, pp. 123-146, Journal of Chinese Linguistics Monograph Series Number 9.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Extended named entity hierarchy",
                "authors": [
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Sekine",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Sudo",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Nobata",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the LREC 2002 Conference",
                "volume": "",
                "issue": "",
                "pages": "1818--1824",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sekine, Satoshi, K. Sudo, and C. Nobata, 2002, \"Extended named entity hierarchy,\" Proceedings of the LREC 2002 Conference, pp. 1818-1824.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Chinese Named Entity Identification Using Class-based Language Model",
                "authors": [
                    {
                        "first": "Jian",
                        "middle": [],
                        "last": "Sun",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "F"
                        ],
                        "last": "Gao",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "N"
                        ],
                        "last": "Huang",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 19th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "967--973",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sun, Jian, J. F. Gao, L. Zhang, M. Zhou, and C. N. Huang, 2002, \"Chinese Named Entity Identification Using Class-based Language Model,\" Proceedings of the 19th International Conference on Computational Linguistics, Taipei, pp. 967-973",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Chinese Place Automatic Recognition Research",
                "authors": [
                    {
                        "first": "Hong-Ye",
                        "middle": [],
                        "last": "Tan",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of Computational Language",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tan, Hong-Ye, 1999, \"Chinese Place Automatic Recognition Research,\" Proceedings of Computational Language, C. N. Huang & Z. D. Dong, ed., Tsinghua Univ. Press, Beijing, China.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "The Application of the Method of Co-Training in Identification of Chinese Organization Names",
                "authors": [
                    {
                        "first": "Xue-Jun",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "B"
                        ],
                        "last": "Zhu",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [
                            "Z"
                        ],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Ye",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "The 2003 National Joint Symposium on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wu, Xue-Jun, J. B. Zhu, H.Z. Wang, and N. Ye, 2003, \"The Application of the Method of Co-Training in Identification of Chinese Organization Names,\" The 2003 National Joint Symposium on Computational Linguistics (JSCL-2003)",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Extracting pronunciation-translated names from Chinese texts using bootstrapping approach",
                "authors": [
                    {
                        "first": "Jing",
                        "middle": [],
                        "last": "Xiao",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "S"
                        ],
                        "last": "Chua",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Nineteenth International Conference on Computational Linguistics (COLING2002)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xiao, Jing, J. M. Liu, and T. S. Chua, 2002, \"Extracting pronunciation-translated names from Chinese texts using bootstrapping approach\", Nineteenth International Conference on Computational Linguistics (COLING2002), Taipei, Taiwan, Aug 2002.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Description of the Kent Ridge Digital Labs System Used for MUC-7",
                "authors": [
                    {
                        "first": "Shi-Hong",
                        "middle": [],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "H"
                        ],
                        "last": "Bai",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the Seventh Message Understanding Conference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yu, Shi-Hong, S. H. Bai, and P. Wu, 1998, \"Description of the Kent Ridge Digital Labs System Used for MUC-7,\" Proceedings of the Seventh Message Understanding Conference (MUC-7).",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "A New Statistical Approach to Personal Name Extraction",
                "authors": [
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "Zheng",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [
                            "Y"
                        ],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "ICML 2002",
                "volume": "",
                "issue": "",
                "pages": "67--74",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zheng, Chen, W. Y. Liu, and F. Zhang, 2002, \"A New Statistical Approach to Personal Name Extraction,\" ICML 2002, pp. 67-74.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "type_str": "figure",
                "uris": null,
                "text": "The overview of the candidate generator",
                "num": null
            },
            "FIGREF1": {
                "type_str": "figure",
                "uris": null,
                "text": "[\u2026] + (Numerals) + [\u2026] + [Qualifier] + [Unit]. This simple rule cannot cover all close-ended NEs, of course; its purpose is simply to prevent unrecognized close-ended NEs from affecting the recognition performance of open-ended ones. In general, the structure of all open-ended NEs except abbreviations can be represented as: [prefixes] + genuine name + [suffixes]",
                "num": null
            },
            "FIGREF3": {
                "type_str": "figure",
                "uris": null,
                "text": "P(TRA|s) = HAvg(P(TRA|c_k)) where s = c_1\u2026c_n and HAvg() returns the harmonic mean. P(LOC|s) = P(LOCL|c1) * P(LOCF|c2) when s = c1c2; P(LOCL|c1) * P(LOCF|c2) * P(LOCF|c3) when s = c1c2c3; 0 elsewhere. P(ORG|s) = P(ORGL|c1) * P(ORGF|c2) when s = c1c2; P(ORGL|c1) * P(ORGL|c2) * P(ORGF|c3) when s = c1c2c3; 0 elsewhere.",
                "num": null
            },
            "TABREF1": {
                "content": "<table><tr><td>Added into candidate pool</td></tr></table>",
                "type_str": "table",
                "num": null,
                "text": "",
                "html": null
            },
            "TABREF2": {
                "content": "<table><tr><td>icf(c) = 1/(counts(~typ, c) + 1)</td></tr><tr><td>prob(c) = counts(typ, c)/(counts(typ, c) + counts(~typ, c))</td></tr><tr><td>Further, we assume that counts(typ, c) \u226b counts(~typ, c).</td></tr></table>",
                "type_str": "table",
                "num": null,
                "text": "Assuming counts(typ, c) \u226b counts(~typ, c), icf(c) is approximately equal to prob(c).",
                "html": null
            },
            "TABREF3": {
                "content": "<table><tr><td>ConSuf(typ, s) estimates the confidence brought by the suffix part. Different types of suffixes bring different quantities of confidence, and one suffix part might comprise many different suffixes, so the summation of each member's confidence is computed: ConSuf(typ, s) = Conf(typ_1, s_1) + Conf(typ_2, s_2) + \u2026 + Conf(typ_n, s_n) + 1, where s = s_1 s_2 \u2026 s_n.</td></tr><tr><td>Reoccurrence(g) when the length of the input document is less than 400 characters; \u2026 elsewhere; k = 1 + 400/LEN(Document).</td></tr><tr><td>P(typ|g) of the different types of names is defined as follows: P(PER|s) = ArgMax{ P(SUR|s1) * P(GIV|s2) } for every substring s1 and s2, where s = s1 s2 and \" \" denotes string concatenation, with: P(SUR|s) = P(SUR|c1) when s consists of one character; Max{ GAvg(P(SUR|c1), P(SUR|c2)), P(SUR|c1c2) } when s consists of two characters; 0 when s is longer than two characters. P(GIV|s) = P(GIV|c1) when s consists of one character; GAvg(P(GIV|c1), P(GIV|c2)) when s consists of two characters; 0 when s is longer than two characters. GAvg() returns the geometric mean.</td></tr></table>",
                "type_str": "table",
                "num": null,
                "text": "personal names, location names, organization names, and transliteration names.",
                "html": null
            },
            "TABREF5": {
                "content": "<table><tr><td>Recall = (# of Ext. - # of False)/(# of True)</td></tr><tr><td>Precision = 1 - (# of False)/(# of Ext.)</td></tr></table>",
                "type_str": "table",
                "num": null,
                "text": "Table 1 presents the experimental results of our system; the standard measurements are estimated as shown. Notice that there are two special columns in the table, number of words and excluding rate. Because the appearing frequencies of NEs vary across domains and have a great impact on precision, precision by itself is less meaningful. We consider the excluding rate a better measurement of over-generation. The excluding rate is counted from:",
                "html": null
            },
            "TABREF6": {
                "content": "<table/>",
                "type_str": "table",
                "num": null,
                "text": "Table 1. Experimental results of our system",
                "html": null
            },
            "TABREF7": {
                "content": "<table/>",
                "type_str": "table",
                "num": null,
                "text": "Table 2. Recall of our system with different types of NEs",
                "html": null
            }
        }
    }
}