{
    "paper_id": "I08-1045",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:42:14.473060Z"
    },
    "title": "A Hybrid Feature Set based Maximum Entropy Hindi Named Entity Recognition",
    "authors": [
        {
            "first": "Sujan",
            "middle": [
                "Kumar"
            ],
            "last": "Saha",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Indian Institute of Technology Kharagpur",
                "location": {
                    "postCode": "721302",
                    "region": "West Bengal India"
                }
            },
            "email": "sujan.kr.saha@gmail.com"
        },
        {
            "first": "Sudeshna",
            "middle": [],
            "last": "Sarkar",
            "suffix": "",
            "affiliation": {},
            "email": "shudeshna@gmail.com"
        },
        {
            "first": "Pabitra",
            "middle": [],
            "last": "Mitra",
            "suffix": "",
            "affiliation": {},
            "email": "pabitra@gmail.com"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We describe our effort in developing a Named Entity Recognition (NER) system for Hindi using Maximum Entropy (Max-Ent) approach. We developed a NER annotated corpora for the purpose. We have tried to identify the most relevant features for Hindi NER task to enable us to develop an efficient NER from the limited corpora developed. Apart from the orthographic and collocation features, we have experimented on the efficiency of using gazetteer lists as features. We also worked on semi-automatic induction of context patterns and experimented with using these as features of the MaxEnt method. We have evaluated the performance of the system against a blind test set having 4 classes-Person, Organization, Location and Date. Our system achieved a f-value of 81.52%.",
    "pdf_parse": {
        "paper_id": "I08-1045",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We describe our effort in developing a Named Entity Recognition (NER) system for Hindi using Maximum Entropy (Max-Ent) approach. We developed a NER annotated corpora for the purpose. We have tried to identify the most relevant features for Hindi NER task to enable us to develop an efficient NER from the limited corpora developed. Apart from the orthographic and collocation features, we have experimented on the efficiency of using gazetteer lists as features. We also worked on semi-automatic induction of context patterns and experimented with using these as features of the MaxEnt method. We have evaluated the performance of the system against a blind test set having 4 classes-Person, Organization, Location and Date. Our system achieved a f-value of 81.52%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Named Entity Recognition involves locating and classifying the names in text. NER is an important task, having applications in Information Extraction (IE), question answering, machine translation and in most other NLP applications.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "NER systems have been developed for English and few other languages with high accuracies. These systems take advantage of large amount of Named Entity (NE) annotated corpora and other NER resources. However when we started working on a NER system for Hindi, we did not have any NER annotated corpora for Hindi, neither did we have access to any comprehensive gazetteer list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this work we have identified suitable features for the Hindi NER task. Orthography features, the suffix and prefix information, as well as information about the sorrounding words and their tags are used to develop a Maximum Entropy (MaxEnt) based Hindi NER system. Additionally, we have acquired gazetteer lists for Hindi and used these gazetteers in the Maximum Entropy (MaxEnt) based Hindi NER system. We also worked on semi-automatically learning of context pattern for identifying names. These context pattern rules have been integrated into the MaxEnt based NER system, leading to a high accuracy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The paper is organized as follows. A brief survey of different techniques used for the NER task in different languages and domains are presented in Section 2. The MaxEnt based NER system is described in Section 3. Various features used in NER are then discussed. Next we present the experimental results and related discussions. Finally Section 8 concludes the paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A variety of techniques has been used for NER. The two major approaches to NER are: 1. Linguistic approaches.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Previous Work",
                "sec_num": "2"
            },
            {
                "text": "The linguistic approaches typically use rules manually written by linguists. There are several rulebased NER systems, containing mainly lexicalized grammar, gazetteer lists, and list of trigger words, which are capable of providing 88%-92% f-measure accuracy for English (Grishman, 1995; McDonald, 1996; Wakao et al., 1996) .",
                "cite_spans": [
                    {
                        "start": 271,
                        "end": 287,
                        "text": "(Grishman, 1995;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 288,
                        "end": 303,
                        "text": "McDonald, 1996;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 304,
                        "end": 323,
                        "text": "Wakao et al., 1996)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Learning based approaches.",
                "sec_num": "2."
            },
            {
                "text": "The main disadvantages of these rule-based techniques are that these require huge experience and grammatical knowledge of the particular language or domain and these systems are not transferable to other languages or domains.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Learning based approaches.",
                "sec_num": "2."
            },
            {
                "text": "Machine Learning (ML) based techniques for NER make use of a large amount of NE annotated training data to acquire high level language knowledge. Several ML techniques have been successfully used for the NER task of which Hidden Markov Model (Bikel et al., 1997 ), Maximum Entropy (Borthwick, 1999 ), Conditional Random Field (Li and Mccallum, 2004 are most common. Combinations of different ML approaches are also used. Srihari et al. (2000) combines Maximum Entropy, Hidden Markov Model and handcrafted rules to build an NER system. NER systems use gazetteer lists for identifying names. Both the linguistic approach (Grishman, 1995; Wakao et al., 1996) and the ML based approach (Borthwick, 1999; Srihari et al., 2000) use gazetteer lists.",
                "cite_spans": [
                    {
                        "start": 236,
                        "end": 261,
                        "text": "Model (Bikel et al., 1997",
                        "ref_id": null
                    },
                    {
                        "start": 262,
                        "end": 297,
                        "text": "), Maximum Entropy (Borthwick, 1999",
                        "ref_id": null
                    },
                    {
                        "start": 298,
                        "end": 348,
                        "text": "), Conditional Random Field (Li and Mccallum, 2004",
                        "ref_id": null
                    },
                    {
                        "start": 421,
                        "end": 442,
                        "text": "Srihari et al. (2000)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 619,
                        "end": 635,
                        "text": "(Grishman, 1995;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 636,
                        "end": 655,
                        "text": "Wakao et al., 1996)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 682,
                        "end": 699,
                        "text": "(Borthwick, 1999;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 700,
                        "end": 721,
                        "text": "Srihari et al., 2000)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Learning based approaches.",
                "sec_num": "2."
            },
            {
                "text": "The linguistic approach uses hand-crafted rules which needs skilled linguistics. Some recent approaches try to learn context patterns through ML which reduce amount of manual labour. Talukder et al.(2006) combined grammatical and statistical techniques to create high precision patterns specific for NE extraction. An approach to lexical pattern learning for Indian languages is described by Ekbal and Bandopadhyay (2007) . They used seed data and annotated corpus to find the patterns for NER.",
                "cite_spans": [
                    {
                        "start": 183,
                        "end": 204,
                        "text": "Talukder et al.(2006)",
                        "ref_id": null
                    },
                    {
                        "start": 392,
                        "end": 421,
                        "text": "Ekbal and Bandopadhyay (2007)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Learning based approaches.",
                "sec_num": "2."
            },
            {
                "text": "The NER task for Hindi has been explored by Cucerzan and Yarowsky in their language independent NER work which used morphological and contextual evidences (Cucerzan and Yarowsky, 1999) . They ran their experiment with 5 languages -Romanian, English, Greek, Turkish and Hindi. Among these the accuracy for Hindi was the worst. For Hindi the system achieved 41.70% f-value with a very low recall of 27.84% and about 85% precision. A more successful Hindi NER system was developed by Wei Li and Andrew Mccallum (2004) using Conditional Random Fields (CRFs) with fea-ture induction. They were able to achieve 71.50% f-value using a training set of size 340k words. In Hindi the maximum accuracy is achieved by (Kumar and Bhattacharyya, 2006) . Their Maximum Entropy Markov Model (MEMM) based model gives 79.7% f-value.",
                "cite_spans": [
                    {
                        "start": 155,
                        "end": 184,
                        "text": "(Cucerzan and Yarowsky, 1999)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 481,
                        "end": 514,
                        "text": "Wei Li and Andrew Mccallum (2004)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 706,
                        "end": 737,
                        "text": "(Kumar and Bhattacharyya, 2006)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Learning based approaches.",
                "sec_num": "2."
            },
            {
                "text": "We have used a Maximum Entropy model to build the NER in Hindi. MaxEnt is a flexible statistical model which assigns an outcome for each token based on its history and features. MaxEnt computes the probability p(o|h) for any o from the space of all possible outcomes O, and for every h from the space of all possible histories H. A history is all the conditioning data that enables one to assign probabilities to the space of outcomes. In NER, history can be viewed as all information derivable from the training corpus relative to the current token. The computation of p(o|h) in MaxEnt depends on a set of features, which are helpful in making predictions about the outcome. The features may be binary-valued or multi-valued. For instance, one of our features is: the current token is a part of the surname list; how likely is it to be part of a person name. Formally, we can represent this feature as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
            {
                "text": "f (h, o) =",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
            {
                "text": "1 if w i in surname list and o = person 0 otherwise (1) Given a set of features and a training corpus, the MaxEnt estimation process produces a model in which every feature f i has a weight \u03b1 i . We can compute the conditional probability as (Pietra et al., 1997) :",
                "cite_spans": [
                    {
                        "start": 242,
                        "end": 263,
                        "text": "(Pietra et al., 1997)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "p(o|h) = 1 Z(h) i \u03b1 i f i (h,o) (2) Z(h) = o i \u03b1 i f i (h,o)",
                        "eq_num": "(3)"
                    }
                ],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
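            {
                "text": "To make Equations (1)-(3) concrete, the following is a minimal Python sketch (an editorial illustration, not part of the original system) that computes p(o|h) as the normalized product of the weights of the active binary features. The feature, weight and outcome names are illustrative assumptions.",
                "code_sketch": [
                    "def p_outcome_given_history(history, outcomes, features, alphas):",
                    "    # score(o) = product of alpha_i ** f_i(h, o); with binary features",
                    "    # this is just the product of the weights of the active features",
                    "    def score(o):",
                    "        s = 1.0",
                    "        for f, alpha in zip(features, alphas):",
                    "            if f(history, o):",
                    "                s *= alpha",
                    "        return s",
                    "    z = sum(score(o) for o in outcomes)            # Z(h), Eq. (3)",
                    "    return {o: score(o) / z for o in outcomes}     # p(o|h), Eq. (2)",
                    "",
                    "# A feature in the spirit of Eq. (1); the list contents are illustrative.",
                    "SURNAMES = {'saha', 'sarkar', 'mitra'}",
                    "def f_surname(h, o):",
                    "    return h['word'] in SURNAMES and o == 'person'",
                    "",
                    "probs = p_outcome_given_history({'word': 'saha'}, ['person', 'not-name'],",
                    "                                [f_surname], [2.5])"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },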
            {
                "text": "So the conditional probability of the outcome is the product of the weights of all active features, normalized over the products of all the features. For our development we have used a Java based opennlp MaxEnt toolkit 1 to get the probability values of a word belonging to each class. That is, given a sequence of words, the probability of each class is obtained for each word. To find the most probable tag corresponding to each word of a sequence, we can choose the tag having the highest class conditional probability value. But this method is not good as it might result in an inadmissible output tag.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
            {
                "text": "Some tag sequences should never happen. To eliminate these inadmissible sequences we have made some restrictions. Then we used a beam search algorithm with a beam of length 3 with these restrictions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
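            {
                "text": "A sketch of this decoding step (assumed shape, not the paper's code): beam search of width 3 over the per-word class probabilities, with an allowed() predicate standing in for the restrictions on inadmissible tag transitions, and prob() standing in for the MaxEnt model.",
                "code_sketch": [
                    "def beam_search(words, tags, prob, allowed, width=3):",
                    "    # prob(i, seq, t): MaxEnt probability of tag t for word i given the",
                    "    # tags chosen so far; allowed(prev, t): False for inadmissible pairs",
                    "    beams = [([], 1.0)]",
                    "    for i in range(len(words)):",
                    "        candidates = []",
                    "        for seq, p in beams:",
                    "            prev = seq[-1] if seq else None",
                    "            for t in tags:",
                    "                if allowed(prev, t):",
                    "                    candidates.append((seq + [t], p * prob(i, seq, t)))",
                    "        # keep only the top-scoring partial tag sequences",
                    "        beams = sorted(candidates, key=lambda c: -c[1])[:width]",
                    "    return beams[0][0]"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },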
            {
                "text": "The training data for this task is composed of about 243K words which is collected from the popular daily Hindi newspaper \"Dainik Jagaran\". This corpus has been manually annotated and has about 16,482 NEs. In this development we have considered 4 types of NEs, these are P erson(P), Location(L), Organization(O) and Date(D). To recognize entity boundaries each name class N is subdivided into 4 sub-classes, i.e., N Begin, N Continue, N End, and N U nique. Hence, there are a total of 17 classes including 1 class for not-name. The corpus contains 6, 298 Person, 4, 696 Location, 3, 652 Organization and 1, 845 Date entities.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum Entropy Based Model",
                "sec_num": "3"
            },
            {
                "text": "Machine learning approaches like MaxEnt, CRF etc. make use of different features for identifying the NEs. Orthographic features (like capitalization, decimal, digits), affixes, left and right context (like previous and next words), NE specific trigger words, gazetteer features, POS and morphological features etc. are generally used for NER. In English and some other languages, capitalization features play an important role as NEs are generally capitalized for these languages. Unfortunately this feature is not applicable for Hindi. Also Indian person names are more diverse, lots of common words having other meanings are also used as person names. These make difficult to develop a NER system on Hindi. Li and Mccallum (2004) used the entire word text, character n-grams (n = 2, 3, 4), word prefix and suffix of lengths 2, 3 and 4, and 24 Hindi gazetteer lists as atomic features in their Hindi NER. Kumar and Bhattacharyya (2006) used word features (suffixes, digits, special characters), context features, dictionary features, NE list features etc. in their MEMM based Hindi NER system. In the following we have discussed about the features we have identified and used to develop the Hindi NER system.",
                "cite_spans": [
                    {
                        "start": 709,
                        "end": 731,
                        "text": "Li and Mccallum (2004)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 906,
                        "end": 936,
                        "text": "Kumar and Bhattacharyya (2006)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Features for Hindi NER",
                "sec_num": "4"
            },
            {
                "text": "The features which we have identified for Hindi Named Entity Recognition are:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "Static Word Feature: The previous and next words of a particular word are used as features. The previous m words (w i\u2212m ...w i\u22121 ) to next n words (w i+1 ...w i+n ) can be treated. During our experiment different combinations of previous 4 to next 4 words are used.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "Context Lists: Context words are defined as the frequent words present in a word window for a particular class. We compiled a list of the most frequent words that occur within a window of w i\u22123 ...w i+3 of every NE class. For example, location context list contains the words like 'jAkara 2 ' (going to), 'desha' (country), 'rAjadhAnI' (capital) etc. and person context list contains 'kahA' (say), 'prdhAnama.ntrI' (prime minister) etc. For a given word, the value of this feature corresponding to a given NE type is set to 1 if the window w i\u22123 ...w i+3 around the w i contains at last one word from this list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
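            {
                "text": "One possible implementation of this feature, as a minimal sketch: the feature fires if the +/-3 word window around w_i contains at least one word from the class's context list. The list entries are the examples from the text; the function itself is an illustration, not the system's code.",
                "code_sketch": [
                    "LOCATION_CONTEXT = {'jAkara', 'desha', 'rAjadhAnI'}  # examples from the text",
                    "",
                    "def context_list_feature(words, i, context_list, k=3):",
                    "    # 1 if any word in the window w_{i-k}..w_{i+k} is in the list",
                    "    window = words[max(0, i - k): i + k + 1]",
                    "    return int(any(w in context_list for w in window))"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },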
            {
                "text": "Dynamic NE tag: Named Entity tags of the previous words (t i\u2212m ...t i\u22121 ) are used as features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "First Word: If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "Contains Digit: If a token 'w' contains digit(s) then the feature ContainsDigit is set to 1. This feature is helpful for identifying company product names (e.g. 06WD1992), house number (e.g. C226) etc.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "Numerical Word: For a token 'w' if the word is a numerical word i.e. a word denoting a number (e.g. eka (one), do (two), tina (three) etc.) then the feature N umW ord is set to 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "Word Suffix: Word suffix information is helpful to identify the named NEs. Two types of suffix features have been used. Firstly a fixed length word suffix of the current and surrounding words are used as features. Secondly we compiled lists of common suffixes of person and place names in Hindi. For example, 'pura', 'bAda', 'nagara' etc. are location suffixes. We used two binary features corresponding to the lists -whether a given word has a suffix from the list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
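            {
                "text": "A sketch of the two suffix features (illustrative code, not the system's): the suffix list is seeded from the examples above, and the length-2 cut-off is the one reported as best in Section 7.1.",
                "code_sketch": [
                    "LOCATION_SUFFIXES = ('pura', 'bAda', 'nagara')  # examples from the text",
                    "",
                    "def fixed_length_suffix(word, n=2):",
                    "    # fixed-length suffix used directly as a feature value",
                    "    return word[-n:]",
                    "",
                    "def has_location_suffix(word):",
                    "    # binary feature: does the word end with a known location suffix?",
                    "    return int(word.endswith(LOCATION_SUFFIXES))"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },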
            {
                "text": "Word Prefix: Prefix information of a word may be also helpful in identifying whether it is a NE. A fixed length word prefix of current and surrounding words are treated as a features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "Parts-of-Speech (POS) Information: The POS of the current word and the surrounding words may be useful feature for NER. We have access to a Hindi POS pagger developed at IIT Kharagpur which has an accuracy about 90%. The tagset of the tagger contains 28 tags. We have used the POS values of the current and surrounding tokens as features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "We realized that the detailed POS tagging is not very relevant. Since NEs are noun phrases, the noun tag is very relevant. Further the postposition following a name may give a clue to the NE type. So we decided to use a coarse-grained tagset with only three tags -nominal (Nom), postposition (PSP) and other (O).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
            {
                "text": "The POS information is also used by defining several binary features. An example is the N omP SP binary feature. The value of this feature is defined to be 1 if the current token is nominal and the next token is a PSP.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },
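            {
                "text": "As a minimal sketch of the NomPSP feature on the coarse tagset {Nom, PSP, O} (the list-based tag representation is an assumption for illustration):",
                "code_sketch": [
                    "def nom_psp(coarse_tags, i):",
                    "    # 1 if the current token is nominal and the next is a postposition",
                    "    return int(coarse_tags[i] == 'Nom'",
                    "               and i + 1 < len(coarse_tags)",
                    "               and coarse_tags[i + 1] == 'PSP')"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature Description",
                "sec_num": "4.1"
            },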
            {
                "text": "Lists of names of various types are helpful in name identification. We have compiled some specialized name lists from different web sources. But the names in these lists are in English, not in Hindi. So we have transliterated these English name lists to make them useful for our Hindi NER task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },
            {
                "text": "For the transliteration we have build a 2-phase transliteration module. We have defined an intermediate alphabet containing 34 characters. English names are transliterated to this intermediate form using a map-table. Hindi strings are also transliterated to the intermediate alphabet form using a different map-table. For a English-Hindi string pair, if transliterations of the both strings are same, then we conclude that one string is the transliteration of the other. This transliteration module works with 91.59% accuracy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },
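            {
                "text": "The matching step of such a 2-phase module might look like the sketch below: both strings are mapped, longest-match first, into a shared intermediate alphabet and compared for equality. The tiny map-tables are illustrative stand-ins for the paper's 34-character alphabet, not its actual tables.",
                "code_sketch": [
                    "ENG_MAP = {'aa': 'A', 'k': 'k', 'r': 'r', 'a': 'a'}   # illustrative",
                    "HIN_MAP = {'आ': 'A', 'क': 'k', 'र': 'r', 'अ': 'a'}    # illustrative",
                    "",
                    "def to_intermediate(s, table):",
                    "    out, i = [], 0",
                    "    while i < len(s):",
                    "        for n in (2, 1):  # greedy longest-match against the map-table",
                    "            if s[i:i + n] in table:",
                    "                out.append(table[s[i:i + n]])",
                    "                i += n",
                    "                break",
                    "        else:",
                    "            i += 1  # skip characters with no mapping",
                    "    return ''.join(out)",
                    "",
                    "def same_name(english, hindi):",
                    "    # the pair matches if both map to the same intermediate string",
                    "    return (to_intermediate(english.lower(), ENG_MAP)",
                    "            == to_intermediate(hindi, HIN_MAP))"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },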
            {
                "text": "Using the transliteration approach we have constructed 8 lists. Which are, month name and days of the week (40) 3 , organization end words list (92), person prefix words list (123), list of common locations (80), location names list (17,600), first names list (9722), middle names list (35), surnames list (1800).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },
            {
                "text": "The lists can be used in name identification in various ways. One way is to check whether a token is in any list. But this approach is not good as it has some limitations. Some words may present in two or more gazetteer lists. For example, 'bangAlora' is in surnames list and also in location names list. Confusions arise to make decisions for these words. Some words are in gazetteer lists but sometimes these are used in text as not-name entity. For example, 'gayA' is in location list but sometimes the word is used as verb in text and makes confusion. These limitations might be reduced if the contexts are considered.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },
            {
                "text": "We have used these gazetteer lists as features of MaxEnt. We have prepared several binary features which are defined as whether a given word is in a particular list. For example, a binary feature F irstN ame is 1 for a particular token 't' if 't' is in the first name list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },
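            {
                "text": "Encoding list membership as one binary feature per gazetteer, as sketched below, also addresses the ambiguities discussed above: a token appearing in two lists simply fires two features, and the learned weights plus the context features decide the class. The list contents here are illustrative.",
                "code_sketch": [
                    "GAZETTEERS = {",
                    "    'FirstName': {'sujan', 'sudeshna'},",
                    "    'Surname': {'saha', 'bangAlora'},",
                    "    'Location': {'bangAlora', 'gayA'},",
                    "}",
                    "",
                    "def gazetteer_features(token):",
                    "    # one binary feature per list; ambiguous tokens fire several",
                    "    return {name: int(token in entries)",
                    "            for name, entries in GAZETTEERS.items()}"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Enhancement using Gazetteer Feature",
                "sec_num": "5"
            },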
            {
                "text": "Context patterns are helpful for identifying NEs. As manual identification of context patterns takes much manual labour and linguistic knowledge, we have developed a module for semi-automatically learning of context pattern. The summary of the context pattern learning module is given follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "1. Collect some seed entities (E) for each class.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "2. For each seed entity e in E, from the corpus find context string(C) comprised of n tokens before e, a placeholder for the class instance and n tokens after e. [We have used n = 3] This set of tokens form initial pattern.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "3. Search the pattern in the corpus and find the coverage and precision.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "4. Discard the patterns having low precision.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "5. Generalize the patterns by dropping one or more tokens to increase coverage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "6. Find best patterns having good precision and coverage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
            {
                "text": "The quality of a pattern is measured by precision and coverage. Precision is the ratio of correct identification and the total identification, when the particular pattern is used to identify of NEs of a specific type from a raw text. Coverage is the amount of total identification. We have given more importance to precision and we have marked a pattern as ef f ective if the precision is more than 95%. The method is applied on an un-annotated text having 4887011 words collected from \"Dainik Jagaran\" and context patterns are learned. These context patterns are used as features of MaxEnt in the Hindi NER system. Some example patterns are: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },
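            {
                "text": "Steps 3 and 4 of the induction loop could be scored as in the sketch below, where matches are the strings a pattern's placeholder captured in the raw corpus and known_entities is the seed/annotated set used to judge correctness (both names are assumptions introduced for illustration).",
                "code_sketch": [
                    "def score_pattern(matches, known_entities):",
                    "    # precision: correct identifications / total identifications",
                    "    # coverage: total number of identifications",
                    "    coverage = len(matches)",
                    "    correct = sum(1 for m in matches if m in known_entities)",
                    "    precision = correct / coverage if coverage else 0.0",
                    "    return precision, coverage",
                    "",
                    "def is_effective(matches, known_entities, threshold=0.95):",
                    "    precision, _ = score_pattern(matches, known_entities)",
                    "    return precision > threshold  # the paper's 95% precision bar"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Pattern based Features",
                "sec_num": "6"
            },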
            {
                "text": "We have evaluated the system using a blind test corpus of 25K words, which is distinct from the training corpus. The accuracies are measured in terms of the f-measure, which is the weighted harmonic mean of precision and recall. Here we can mention that we have evaluated the performance of the system on actual NEs. That means the system annotates the test data using 17 tags, similar to the training data. During evaluation we have merged the sub-tags of a particular entity to get a complete NEs and calculated the accuracies. At the end of section 7.1 we have also mentioned the accuracies if evaluated on the tags. A number of experiments are conducted considering various combinations of features to identify the best feature set for the Hindi NER task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "7"
            },
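            {
                "text": "A sketch of this evaluation protocol: sub-tags are first merged into complete NE spans, and the f-measure is then computed over those spans. The underscore-joined tag names are an assumed encoding of the 17-class scheme, not necessarily the system's internal labels.",
                "code_sketch": [
                    "def merge_spans(tags):",
                    "    # collapse N_Begin / N_Continue / N_End / N_Unique into",
                    "    # (start, end, class) triples",
                    "    spans, start = [], None",
                    "    for i, t in enumerate(tags):",
                    "        if t.endswith('_Begin'):",
                    "            start = i",
                    "        elif t.endswith('_End') and start is not None:",
                    "            spans.append((start, i, t.split('_')[0]))",
                    "            start = None",
                    "        elif t.endswith('_Unique'):",
                    "            spans.append((i, i, t.split('_')[0]))",
                    "    return spans",
                    "",
                    "def f_measure(gold_spans, predicted_spans):",
                    "    tp = len(set(gold_spans) & set(predicted_spans))",
                    "    p = tp / len(predicted_spans) if predicted_spans else 0.0",
                    "    r = tp / len(gold_spans) if gold_spans else 0.0",
                    "    return 2 * p * r / (p + r) if p + r else 0.0"
                ],
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "7"
            },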
            {
                "text": "The baseline performance of the system without using gazetteer and context patterns are presented in Table 1 . They are summarized below.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 101,
                        "end": 108,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Baseline",
                "sec_num": "7.1"
            },
            {
                "text": "While experimenting with static word features, we have observed that a window of previous two Similarly we have experimented with suffixes of different lengths and observed that the suffixes of length \u2264 2 gives the best result for the Hindi NER task. In using POS information, we have observed that the coarse-grained POS tagger information is more effective than the finer-grained POS values. A feature set, combining finer-grained POS values, surrounding words and previous NE tag, gives a f-value of 70.39%. But when the coarse-grained POS values are used instead of the finer-grained POS values, the f-value is increased to 74.16%. The most interesting fact we have observed that more complex features do not guarantee to achieve better results. For example, a feature set combined with current and surrounding words, previous NE tag and fixed length suffix information, gives a f-value 73.42%. But when prefix information are added the f-value decreased to 72.5%. The highest accuracy achieved by the system is 75.6% f-value without using gazetteer information and context patterns.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baseline",
                "sec_num": "7.1"
            },
            {
                "text": "The results in Table 1 are obtained by evaluating on the actual NEs. But when the system is evaluated on the tags the f-value increases. For f6, the accuracy achieved on actual NEs is 75.6%, but if evaluated on tags, the value increased to 77.36%. Similarly, for f2, the accuracy increased to 75.91% if evaluated on tags. The reason is the NEs containing 3 or more words, are subdivided to N-begin, Ncontinue (1 or more) and N-end. So if there is an error in any of the subtags, the total NE becomes an error. We observed many cases where NEs are partially identified by the system, but these are considered as error during evaluation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 15,
                        "end": 22,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Baseline",
                "sec_num": "7.1"
            },
            {
                "text": "Next we add gazetteer and context patterns as features in our MaxEnt based NER system. In Table 2 we have compared the results after addition of gazetteer information and context patterns with previous results. While experimenting we have observed that gazetteer lists and context patterns are capable of increasing the performance of our baseline system. That is tested on all the baseline feature sets. In Table 2 the comparison is shown for only two features -f2 and f6 which are defined in Table 1 . It may be observed that the relative advantage of using both gazetteer and context patterns together over using them individually is not much. For example, when gazetteer information are added with f2, the fvalue is increased by 6.38%, when context patterns are added the f-value is increased by 6.64%., but when both are added the increment is 7.27%. This may be due to the fact that both gazetteer and context patterns lead to the same identifications. Using the comprehensive feature set (using gazetteer information and context patterns) the MaxEnt based NER system achieves the maximum f-value of 81.52%. ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 90,
                        "end": 97,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 408,
                        "end": 415,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 494,
                        "end": 501,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Using Gazetteer Lists and Context Patterns",
                "sec_num": "7.2"
            },
            {
                "text": "We have shown that our MaxEnt based NER system is able to achieve a f-value of 81.52%, using a hybrid set of features including traditional NER features augmented with gazetteer lists and extracted context patterns. The system outperforms the existing NER systems in Hindi.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "8"
            },
            {
                "text": "Feature selection and feature clustering might lead to further improvement of performance and is under investigation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "8"
            },
            {
                "text": "www.maxent.sourceforge.net.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "All Hindi words are written in italics using the 'Itrans' transliteration.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The italics integers in brackets indicate the size of the lists.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The work is partially funded by Microsoft Research India.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgement",
                "sec_num": "9"
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Nymble: A High Performance Learning Name-finder",
                "authors": [
                    {
                        "first": "Bikel",
                        "middle": [],
                        "last": "Daniel",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Miller",
                        "middle": [],
                        "last": "Scott",
                        "suffix": ""
                    },
                    {
                        "first": "Schwartz",
                        "middle": [],
                        "last": "Richard",
                        "suffix": ""
                    },
                    {
                        "first": "Weischedel",
                        "middle": [],
                        "last": "Ralph",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "194--201",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bikel Daniel M., Miller Scott, Schwartz Richard and Weischedel Ralph. 1997. Nymble: A High Perfor- mance Learning Name-finder. In Proceedings of the Fifth Conference on Applied Natural Language Pro- cessing, pages 194-201.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "A Maximum Entropy Approach to Named Entity Recognition",
                "authors": [
                    {
                        "first": "Borthwick",
                        "middle": [],
                        "last": "Andrew",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Borthwick Andrew. 1999. A Maximum Entropy Ap- proach to Named Entity Recognition. Ph.D. thesis, Computer Science Department, New York University.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Language Independent Named Entity Recognition Combining Morphological and Contextual Evidence",
                "authors": [
                    {
                        "first": "Cucerzan",
                        "middle": [],
                        "last": "Silviu",
                        "suffix": ""
                    },
                    {
                        "first": "Yarowsky",
                        "middle": [],
                        "last": "David",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the Joint SIGDAT Conference on EMNLP and VLC 1999",
                "volume": "",
                "issue": "",
                "pages": "90--99",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cucerzan Silviu and Yarowsky David. 1999. Language Independent Named Entity Recognition Combining Morphological and Contextual Evidence. In Proceed- ings of the Joint SIGDAT Conference on EMNLP and VLC 1999, pages 90-99.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Lexical Pattern Learning from Corpus Data for Named Entity Recognition",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Ekbal",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Bandyopadhyay",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of International Conference on Natural Language Processing (ICON)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ekbal A. and Bandyopadhyay S. 2007. Lexical Pattern Learning from Corpus Data for Named Entity Recog- nition. In Proceedings of International Conference on Natural Language Processing (ICON), 2007.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "The New York University System MUC-6 or Where's the syntax?",
                "authors": [
                    {
                        "first": "Grishman",
                        "middle": [],
                        "last": "Ralph",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the Sixth Message Understanding Conference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Grishman Ralph. 1995. The New York University Sys- tem MUC-6 or Where's the syntax? In Proceedings of the Sixth Message Understanding Conference.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Named Entity Recognition in Hindi using MEMM",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Kumar",
                        "suffix": ""
                    },
                    {
                        "first": "Bhattacharyya",
                        "middle": [],
                        "last": "Pushpak",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Technical Report, IIT",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kumar N. and Bhattacharyya Pushpak. 2006. Named Entity Recognition in Hindi using MEMM. In Techni- cal Report, IIT Bombay, India..",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Rapid Development of Hindi Named Entity Recognition using Conditional Random Fields and Feature Induction (Short Paper)",
                "authors": [
                    {
                        "first": "Li",
                        "middle": [],
                        "last": "Wei",
                        "suffix": ""
                    },
                    {
                        "first": "Mccallum",
                        "middle": [],
                        "last": "Andrew",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "ACM Transactions on Computational Logic",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Li Wei and McCallum Andrew. 2004. Rapid Develop- ment of Hindi Named Entity Recognition using Con- ditional Random Fields and Feature Induction (Short Paper). ACM Transactions on Computational Logic.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Internal and external evidence in the identification and semantic categorization of proper names",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Mcdonald",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Corpus Processing for Lexical Acquisition",
                "volume": "",
                "issue": "",
                "pages": "21--39",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "McDonald D. 1996. Internal and external evidence in the identification and semantic categorization of proper names. In B. Boguraev and J. Pustejovsky, editors, Corpus Processing for Lexical Acquisition, pages 21- 39.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Inducing features of random fields",
                "authors": [
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Lafferty",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
                "volume": "19",
                "issue": "4",
                "pages": "380--393",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pietra Stephen Della, Pietra Vincent Della and Lafferty John. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, 19(4):380-393.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A Hybrid Approach for Named Entity and Sub-Type Tagging",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Srihari",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Niu",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the sixth conference on Applied natural language processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Srihari R., Niu C. and Li W. 2000. A Hybrid Approach for Named Entity and Sub-Type Tagging. In Proceed- ings of the sixth conference on Applied natural lan- guage processing.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A context pattern induction method for named entity extraction",
                "authors": [
                    {
                        "first": "Pratim",
                        "middle": [
                            "P"
                        ],
                        "last": "Talukdar",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Liberman",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Talukdar Pratim P., Brants T., Liberman M., and Pereira F. 2006. A context pattern induction method for named entity extraction. In Proceedings of the Tenth Conference on Computational Natural Lan- guage Learning (CoNLL-X).",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Evaluation of an algorithm for the recognition and classification of proper names",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Wakao",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Gaizauskas",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Wilks",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of COLING-96",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wakao T., Gaizauskas R. and Wilks Y. 1996. Evaluation of an algorithm for the recognition and classification of proper names. In Proceedings of COLING-96.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF1": {
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>Feature</td><td>Class</td><td>F-value</td></tr><tr><td/><td>PER</td><td>63.33</td></tr><tr><td/><td>LOC</td><td>69.56</td></tr><tr><td>f1 = Word, NE Tag</td><td>ORG</td><td>58.58</td></tr><tr><td/><td>DAT</td><td>91.76</td></tr><tr><td/><td colspan=\"2\">TOTAL 69.64</td></tr><tr><td/><td>PER</td><td>69.75</td></tr><tr><td/><td>LOC</td><td>75.8</td></tr><tr><td>f2 = Word, NE Tag,</td><td>ORG</td><td>59.31</td></tr><tr><td>Suffix (\u2264 2)</td><td>DAT</td><td>89.09</td></tr><tr><td/><td colspan=\"2\">TOTAL 73.42</td></tr><tr><td/><td>PER</td><td>70.61</td></tr><tr><td/><td>LOC</td><td>71</td></tr><tr><td>f3 = Word, NE Tag,</td><td>ORG</td><td>59.31</td></tr><tr><td>Suffix (\u2264 2), Prefix</td><td>DAT</td><td>89.09</td></tr><tr><td/><td colspan=\"2\">TOTAL 72.5</td></tr><tr><td/><td>PER</td><td>70.61</td></tr><tr><td/><td>LOC</td><td>75.8</td></tr><tr><td>f4 = Word, NE Tag,</td><td>ORG</td><td>60.54</td></tr><tr><td>Digit, Suffix (\u2264 2)</td><td>DAT</td><td>93.8</td></tr><tr><td/><td colspan=\"2\">TOTAL 74.26</td></tr><tr><td/><td>PER</td><td>64.25</td></tr><tr><td/><td>LOC</td><td>71</td></tr><tr><td>f5 = Word, NE Tag, POS</td><td>ORG</td><td>60.54</td></tr><tr><td/><td>DAT</td><td>89.09</td></tr><tr><td/><td colspan=\"2\">TOTAL 70.39</td></tr><tr><td/><td>PER</td><td>72.26</td></tr><tr><td>f6 = Word, NE Tag,</td><td>LOC</td><td>78.6</td></tr><tr><td>Suffix (\u2264 2), Digit,</td><td>ORG</td><td>51.36</td></tr><tr><td>N omP SP</td><td>DAT</td><td>92.82</td></tr><tr><td/><td colspan=\"2\">TOTAL 75.6</td></tr></table>",
                "html": null,
                "text": "Table 1: F-values for different features words to next two words (W i\u22122 ...W i+2 ) gives best results. But when several other features are combined then single word window (W i\u22121 ...W i+1 ) performs better."
            },
            "TABREF3": {
                "num": null,
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "text": "F-values for different features with gazetteers and context patterns"
            }
        }
    }
}