{
    "paper_id": "I11-1044",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:30:42.212479Z"
    },
    "title": "Extracting Relation Descriptors with Conditional Random Fields",
    "authors": [
        {
            "first": "Yaliang",
            "middle": [],
            "last": "Li",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Singapore Management University",
                "location": {
                    "country": "Singapore"
                }
            },
            "email": "ylli@smu.edu.sg"
        },
        {
            "first": "Jing",
            "middle": [],
            "last": "Jiang",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Singapore Management University",
                "location": {
                    "country": "Singapore"
                }
            },
            "email": "jingjiang@smu.edu.sg"
        },
        {
            "first": "Hai",
            "middle": [],
            "last": "Leong Chieu",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "DSO National Laboratories",
                "location": {
                    "country": "Singapore"
                }
            },
            "email": ""
        },
        {
            "first": "Kian",
            "middle": [
                "Ming",
                "A."
            ],
            "last": "Chai",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "DSO National Laboratories",
                "location": {
                    "country": "Singapore"
                }
            },
            "email": "ckianmin@dso.org.sg"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In this paper we study a novel relation extraction problem where a general relation type is defined but relation extraction involves extracting specific relation descriptors from text. This new task can be treated as a sequence labeling problem. Although linear-chain conditional random fields (CRFs) can be used to solve this problem, we modify this baseline solution in order to better fit our task. We propose two modifications to linear-chain CRFs, namely, reducing the space of possible label sequences and introducing long-range features. Both modifications are based on some special properties of our task. Using two data sets we have annotated, we evaluate our methods and find that both modifications to linear-chain CRFs can significantly improve the performance for our task.",
    "pdf_parse": {
        "paper_id": "I11-1044",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In this paper we study a novel relation extraction problem where a general relation type is defined but relation extraction involves extracting specific relation descriptors from text. This new task can be treated as a sequence labeling problem. Although linear-chain conditional random fields (CRFs) can be used to solve this problem, we modify this baseline solution in order to better fit our task. We propose two modifications to linear-chain CRFs, namely, reducing the space of possible label sequences and introducing long-range features. Both modifications are based on some special properties of our task. Using two data sets we have annotated, we evaluate our methods and find that both modifications to linear-chain CRFs can significantly improve the performance for our task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Relation extraction is the task of identifying and characterizing the semantic relations between entities in text. Depending on the application and the resources available, relation extraction has been studied in a number of different settings. When relation types are well defined and labeled relation mention instances are available, supervised learning is usually applied (Zelenko et al., 2003; Zhou et al., 2005; Bunescu and Mooney, 2005; Zhang et al., 2006) . When relation types are known but little training data is available, bootstrapping has been used to iteratively expand the set of seed examples and relation patterns (Agichtein and Gravano, 2000) . When no relation type is pre-defined but there is a focused corpus of interest, unsupervised relation discovery tries to cluster entity pairs in order to identify interesting relation types (Hasegawa et al., 2004; Rosenfeld and Feldman, 2006; Shinyama and Sekine, 2006) . More recently, open relation extraction has also been proposed where there is no fixed domain or predefined relation type, and the goal is to identify all possible relations from an open-domain corpus (Banko and Etzioni, 2008; Wu and Weld, 2010; Hoffmann et al., 2010) .",
                "cite_spans": [
                    {
                        "start": 375,
                        "end": 397,
                        "text": "(Zelenko et al., 2003;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 398,
                        "end": 416,
                        "text": "Zhou et al., 2005;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 417,
                        "end": 442,
                        "text": "Bunescu and Mooney, 2005;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 443,
                        "end": 462,
                        "text": "Zhang et al., 2006)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 631,
                        "end": 660,
                        "text": "(Agichtein and Gravano, 2000)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 853,
                        "end": 876,
                        "text": "(Hasegawa et al., 2004;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 877,
                        "end": 905,
                        "text": "Rosenfeld and Feldman, 2006;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 906,
                        "end": 932,
                        "text": "Shinyama and Sekine, 2006)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1136,
                        "end": 1161,
                        "text": "(Banko and Etzioni, 2008;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 1162,
                        "end": 1180,
                        "text": "Wu and Weld, 2010;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 1181,
                        "end": 1203,
                        "text": "Hoffmann et al., 2010)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "These different relation extraction settings suit different applications. In this paper, we focus on another setting where the relation types are defined at a general level but a more specific relation description is desired. For example, in the widely used ACE 1 data sets, relation types are defined at a fairly coarse granularity. Take for instance the \"employment\" relation, which is a major relation type defined in ACE. In ACE evaluation, extraction of this relation only involves deciding whether a person entity is employed by an organization entity. In practice, however, we often also want to find the exact job title or position this person holds at the organization if this information is mentioned in the text. Table 1 gives some examples. We refer to the segment of text that describes the specific relation between the two related entities (i.e., the two arguments) as the relation descriptor. This paper studies how to extract such relation descriptors given two arguments. One may approach this task as a sequence labeling problem and apply methods such as the linearchain conditional random fields (CRFs) (Lafferty et al., 2001 ). However, this solution ignores a useful property of the task: the space of possible label sequences is much smaller than that enumerated by a linear-chain CRF. There are two implications. First, the normalization constant in the linear-chain CRF is too large because it also enumerates the impossible sequences. Second, the restriction to the correct space of label sequence per- ",
                "cite_spans": [
                    {
                        "start": 1123,
                        "end": 1145,
                        "text": "(Lafferty et al., 2001",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 724,
                        "end": 731,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A ARG-2 spokesman , ARG-1 , said the company now ... spokesman At ARG-2 , by contrast , ARG-1 said customers spend on ... Nil",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "vice president (PER, ORG)",
                "sec_num": null
            },
            {
                "text": "Personal/Social ARG-1 had an elder brother named ARG-2 . an elder brother (PER, PER)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "vice president (PER, ORG)",
                "sec_num": null
            },
            {
                "text": "ARG-1 was born at ... , as the son of ARG-2 of Sweden ... the son ARG-1 later married ARG-2 in 1973 , ... married Through his contact with ARG-1 , ARG-2 joined the Greek Orthodox Church . Nil Table 1 : Some examples of candidate relation instances and their relation descriptors.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 192,
                        "end": 199,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "vice president (PER, ORG)",
                "sec_num": null
            },
            {
                "text": "mits the use of long-range features without an exponential increase in computational cost. We compare the performance of the baseline linear-chain CRF model and our special CRF model on two data sets that we have manually annotated. Our experimental results show that both reducing the label sequence space and introducing long-range features can significantly improve the baseline performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "vice president (PER, ORG)",
                "sec_num": null
            },
            {
                "text": "The rest of the paper is organized as follows. In Section 2 we review related work. We then formally define our task in Section 3. In Section 4 we present a baseline linear-chain CRF-based solution and our modifications to the baseline method. We discuss the annotation of our data sets and show our experimental results in Section 5. We conclude in Section 6.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "vice president (PER, ORG)",
                "sec_num": null
            },
            {
                "text": "Most existing work on relation extraction studies binary relations between two entities. For supervised relation extraction, existing work often uses the ACE benchmark data sets for evaluation (Bunescu and Mooney, 2005; Zhou et al., 2005; Zhang et al., 2006) . In this setting, a set of relation types are defined and the task is to identify pairs of entities that are related and to classify their relations into one of the pre-defined relation types. It is assumed that the relation type itself is sufficient to characterize the relation between the two related entities. However, based on our observation, some of the relation types defined in ACE such as the \"employment\" relation and the \"personal/social\" relation are very general and can be further characterized by more specific descriptions.",
                "cite_spans": [
                    {
                        "start": 193,
                        "end": 219,
                        "text": "(Bunescu and Mooney, 2005;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 220,
                        "end": 238,
                        "text": "Zhou et al., 2005;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 239,
                        "end": 258,
                        "text": "Zhang et al., 2006)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Recently open relation extraction has been proposed for open-domain information extraction (Banko and Etzioni, 2008) . Since there are no fixed relation types, open relation extraction aims at extracting all possible relations between pairs of entities. The extracted results are (ARG-1, REL, ARG-2) tuples. The TextRunner system based on (Banko and Etzioni, 2008) extracts a diverse set of relations from a huge Web corpus. These extracted predicate-argument tuples are presumably the most useful to support Web search scenarios where the user is looking for specific relations. However, because of the diversity of the extracted relations and the domain independence, open relation extraction is probably not suitable for populating relational databases or knowledgebases. In contrast, the task of extracting relation descriptors as we have proposed still assumes a pre-defined general relation type, which ensures that the extracted tuples follow the same relation definition and thus can be used in applications such as populating relational databases.",
                "cite_spans": [
                    {
                        "start": 91,
                        "end": 116,
                        "text": "(Banko and Etzioni, 2008)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 339,
                        "end": 364,
                        "text": "(Banko and Etzioni, 2008)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "In terms of models and techniques, we use standard linear-chain CRF as our baseline, which is the main method used in (Banko and Etzioni, 2008) as well as for many other information extraction problems. The major modifications we propose for our task are the reduction of the label sequence space and the incorporation of long-range features. We note that these modifications are closely related to the semi-Markov CRF models proposed by Sarawagi and Cohen (2005) . In fact, the modified CRF model for our task can be considered as a special case of semi-Markov CRF where we only consider label sequences that contain at most one relation descriptor sequence.",
                "cite_spans": [
                    {
                        "start": 118,
                        "end": 143,
                        "text": "(Banko and Etzioni, 2008)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 438,
                        "end": 463,
                        "text": "Sarawagi and Cohen (2005)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "In this section we define the task of extracting relation descriptors for a given pre-defined class of relations such as \"employment.\" Given two named entities occurring in the same sentence, one acting as ARG-1 and the other as ARG-2, we aim to extract a segment of text from the sentence that best describes a pre-defined general relation between the two entities. Formally, let (w 1 , w 2 , . . . , w n ) denote the sequence of tokens in a sentence, where w p is ARG-1 and w q is ARG-2 (1 \u2264 p, q \u2264 n, p = q). Our goal is to locate a subsequence (w r , . . . , w s ) (1 \u2264 r \u2264 s \u2264 n) that best describes the relation between ARG-1 and ARG-2. If ARG-1 and ARG-2 are not related through the pre-defined general relation, Nil should be returned.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "3"
            },
            {
                "text": "The above definition constrains ARG-1 and ARG-2 to single tokens. In our experiments, we will replace the original lexical strings of ARG-1 and ARG-2 with the generic tokens ARG1 and ARG2. Examples of sentences with the named entities replaced with argument tokens are shown in the second column of Table 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 299,
                        "end": 306,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "3"
            },
            {
                "text": "The relation descriptor extraction task can be treated as a sequence labeling problem. Let x = (x 1 , x 2 , . . . , x n ) denote the sequence of observations in a relation instance, where x i is w i augmented with additional information such as the POS tag of w i , and the phrase boundary information. Each observation x i is associated with a label y i \u2208 Y which indicates whether w i is part of the relation descriptor. Following the commonly used BIO notation (Ramshaw and Marcus, 1995) in sequence labeling, we define Y = {B-REL, I-REL, O}. Let y = (y 1 , y 2 , . . . , y n ) denote the sequence of labels for x. Our task can be reduced to finding the best label sequence\u0177 among all the possible label sequences for x.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method 4.1 Representation",
                "sec_num": "4"
            },
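To make the BIO encoding above concrete, here is a minimal illustrative sketch in Python; the sentence and labels are hypothetical, constructed in the spirit of the employment examples in Table 1, and are not taken from the annotated data sets.

```python
# Hypothetical candidate relation instance, with the argument entities already
# replaced by the generic tokens ARG1 and ARG2 as described in Section 3.
tokens = ["ARG1", ",", "the", "vice", "president", "of", "ARG2", ",", "said", "..."]
labels = ["O",    "O", "B-REL", "I-REL", "I-REL",  "O",  "O",    "O", "O",    "O"]

# The relation descriptor is the single contiguous B-REL/I-REL span.
descriptor = [tok for tok, lab in zip(tokens, labels) if lab in ("B-REL", "I-REL")]
print(" ".join(descriptor))  # -> the vice president
```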
            {
                "text": "For sequence labeling tasks in NLP, linear-chain CRFs have been rather successful. It is an undirected graphical model in which the conditional probability of a label sequence y given the observation sequence x is",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "p(y|x, \u039b) = exp \" P i P k \u03bb k f k (yi\u22121, yi, x) \" Z(x, \u039b) ,",
                        "eq_num": "(1)"
                    }
                ],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "where \u039b = {\u03bb k } is the set of model parameters, f k is an arbitrary feature function defined over two consecutive labels and the whole observation sequence, and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Z(x, \u039b) = X y exp \" X i X k \u03bb k f k (y i\u22121 , y i , x) \"",
                        "eq_num": "(2)"
                    }
                ],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "is the normalization constant. Given a set of training instances {x j , y * j } where y * j is the correct label sequence for x j , we can learn the best model parameters\u039b as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u039b = arg min \u039b \u2212 X j log p(y * j |xj, \u039b) + \u03b2 X k \u03bb 2 k ! .",
                        "eq_num": "(3)"
                    }
                ],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "Here \u03b2 k \u03bb 2 k is a regularization term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Linear-Chain CRF Solution",
                "sec_num": "4.2"
            },
            {
                "text": "We note that while we can directly apply linearchain CRFs to extract relation descriptors, there are some special properties of our task that allow us to modify standard linear-chain CRFs to better suit our needs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improvement over Linear-Chain CRFs",
                "sec_num": "4.3"
            },
            {
                "text": "In linear-chain CRFs, the normalization constant Z considers all possible label sequences y. For the relation descriptor extraction problem, however, we expect that there is either a single relation descriptor sequence or no such sequence. In other words, for a given relation instance, we only expect two kinds of label sequences: (1) All y i are O, and (2) exactly one y i is B-REL followed by zero or more consecutive I-REL while all other y i are O. Therefore the space of label sequences should be reduced to only those that satisfy the above constraint.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label sequence constraint",
                "sec_num": null
            },
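The constraint described above shrinks the label space dramatically: for a sentence of n tokens there are only 1 + n(n+1)/2 valid sequences (the all-O sequence plus one sequence per contiguous descriptor span). Below is a minimal sketch of this constrained space; it is an illustration, not the paper's implementation.

```python
def valid_label_sequences(n):
    """Enumerate the constrained space: either all O, or exactly one B-REL
    followed by zero or more consecutive I-REL labels, all other labels O."""
    yield ["O"] * n                              # no relation descriptor (Nil)
    for r in range(n):                           # descriptor start index
        for s in range(r, n):                    # descriptor end index (inclusive)
            seq = ["O"] * n
            seq[r] = "B-REL"
            for t in range(r + 1, s + 1):
                seq[t] = "I-REL"
            yield seq

# For n = 4 there are 1 + 4 * 5 / 2 = 11 valid sequences, compared with
# 3^4 = 81 sequences over {B-REL, I-REL, O} without the constraint.
print(sum(1 for _ in valid_label_sequences(4)))  # -> 11
```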
            {
                "text": "One way to exploit this constraint within linearchain CRFs is to enforce it only during testing. We can pick the label sequence that has the highest probability in the valid label sequence space instead of the entire label sequence space. For a candidate relation instance x, let\u1ef8 denote the set of valid label sequences, i.e., those that have either one or no relation descriptor sequence. We then choose the best sequence\u0177 as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label sequence constraint",
                "sec_num": null
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "y = arg max y\u2208\u1ef8 p(y|x,\u039b).",
                        "eq_num": "(4)"
                    }
                ],
                "section": "Label sequence constraint",
                "sec_num": null
            },
            {
                "text": "Arguably, the more principled way to exploit the constraint is to modify the probabilistic model itself. So at the training stage, we should also consider only\u1ef8 by defining the normalization termZ as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label sequence constraint",
                "sec_num": null
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Z(x, \u039b) = X y \u2208\u1ef8 exp \" X i X k \u03bb k f k (y i\u22121 , y i , x) \" .",
                        "eq_num": "(5)"
                    }
                ],
                "section": "Label sequence constraint",
                "sec_num": null
            },
            {
                "text": "The difference between Equation 5and Equation (2) is the set of label sequences considered. In other words, while in linear-chain CRFs the correct label sequence competes with all possible label sequences for probability mass, for our task the correct label sequence should compete with only other valid label sequences. In Section 5 we will compare these two different normalization terms and show the advantage of using Equation 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label sequence constraint",
                "sec_num": null
            },
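As a rough sketch of the two quantities being contrasted here, the restricted normalization term in Equation (5) and the constrained decoding rule in Equation (4) can both be computed by direct enumeration over the valid sequences, since that space is only quadratic in the sentence length. The `score` function below is a toy stand-in for the CRF potential \u03a3_i \u03a3_k \u03bb_k f_k(y_{i-1}, y_i, x), and `valid_label_sequences` refers to the helper sketched earlier; none of these names come from the paper or from CRF++.

```python
import math

def score(y, x, weights):
    """Toy stand-in for sum_i sum_k lambda_k f_k(y[i-1], y[i], x): one weight
    per (previous label, current label, token) triple, defaulting to 0."""
    total, prev = 0.0, "START"
    for label, token in zip(y, x):
        total += weights.get((prev, label, token), 0.0)
        prev = label
    return total

def restricted_log_z(x, weights, valid_sequences):
    """Log of the restricted normalization term: sum over valid sequences only."""
    return math.log(sum(math.exp(score(y, x, weights)) for y in valid_sequences))

def constrained_decode(x, weights, valid_sequences):
    """Equation (4): pick the highest-scoring sequence among the valid ones."""
    return max(valid_sequences, key=lambda y: score(y, x, weights))

# Usage sketch:
# x = ["ARG1", "is", "president", "of", "ARG2"]
# valid = list(valid_label_sequences(len(x)))
# y_hat = constrained_decode(x, weights, valid)
```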
            {
                "text": "In linear-chain CRF models, only first-order label dependencies are considered because features are defined over two consecutive labels. Inference in linear-chain CRFs can be done efficiently using dynamic programming. More general higherorder CRF models also exist, allowing long-range features defined over more than two consecutive labels. But the computational cost of higher-order CRFs also increases exponentially with the order of dependency.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adding long-range features",
                "sec_num": null
            },
            {
                "text": "For our task, because of the constraint on the space of label sequences, we can afford to use long-range features. In our case, inference is still efficient because the number of sequences to be enumerated has been drastically reduced due to the constraint. Let g(y, x) denote a feature function defined over the entire label sequence y and the observation sequence x. We can include such feature functions in our model as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adding long-range features",
                "sec_num": null
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "p(y|x, \u0398) = 1 Z(x,\u0398) \" exp \" P i P k \u03bb k f k (yi\u22121, yi, x) + P l \u00b5 l g l (y, x) \" # ,",
                        "eq_num": "(6)"
                    }
                ],
                "section": "Adding long-range features",
                "sec_num": null
            },
            {
                "text": "where \u0398 = {{\u03bb k }, {\u00b5 l }} is the set of all model parameters. Both {\u03bb k } and {\u00b5 l } are regularized as in Equation 3. Note that although each f (y i\u22121 , y i , x) may be subsumed under a g(y, x), here we group all features that can be captured by linear-chain CRFs under f and other real longrange features under g. In Section 5 we will see that with the additional feature functions g, relation extraction performance can also be further improved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adding long-range features",
                "sec_num": null
            },
            {
                "text": "We now describe the features we use in the baseline linear-chain CRF model and our modified model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Features",
                "sec_num": "4.4"
            },
            {
                "text": "The linear-chain features are those that can be formulated as f (y i\u22121 , y i , x), i.e., those that depend on x and two consecutive labels only. We use typical features that include tokens, POS tags and phrase boundary information coupled with label values. Let t i denote the POS tag of w i and p i denote the phrase boundary tag of w i . The phrase boundary tags also follow the BIO notation. Examples include B-NP, I-VP, etc. Table 2 shows the feature templates covering only the observations. Each feature shown in Table 2 is further combined with either the value of the current label y i or the values of the previous and the current labels y i\u22121 and y i to form zeroth order and first order features. For example, a zeroth order feature is \"y i is B-REL and w i is the and w i+1 is president\", and a first order feature is \"y i\u22121 is O and y i is B-REL and t i is N\".",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 429,
                        "end": 436,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 519,
                        "end": 526,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Linear-chain features",
                "sec_num": null
            },
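The following sketch shows one plausible way to instantiate the observation-side templates of Table 2 at a position i; the feature-string format is hypothetical and is not the CRF++ template file actually used in the experiments. Each returned string would then be conjoined with y_i (zeroth order) or with the pair (y_{i-1}, y_i) (first order).

```python
def linear_chain_features(tokens, pos_tags, phrase_tags, i):
    """Instantiate the Table 2 observation templates at position i."""
    n = len(tokens)
    def get(seq, j):
        return seq[j] if 0 <= j < n else "PAD"   # pad outside the sentence
    feats = []
    for j in range(-2, 3):                       # single token / POS tag / phrase tag
        feats.append(f"w[{j}]={get(tokens, i + j)}")
        feats.append(f"t[{j}]={get(pos_tags, i + j)}")
        feats.append(f"p[{j}]={get(phrase_tags, i + j)}")
    for j in range(-1, 3):                       # two consecutive tokens / tags
        feats.append(f"w[{j-1}]&w[{j}]={get(tokens, i + j - 1)}&{get(tokens, i + j)}")
        feats.append(f"t[{j-1}]&t[{j}]={get(pos_tags, i + j - 1)}&{get(pos_tags, i + j)}")
        feats.append(f"p[{j-1}]&p[{j}]={get(phrase_tags, i + j - 1)}&{get(phrase_tags, i + j)}")
    return feats
```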
            {
                "text": "Long-range features are those that cannot be defined based on only two consecutive labels. When defining long-range features, we treat the whole relation descriptor sequence as a single unit, denoted as REL. Given a label sequence y that contains a relation descriptor sequence, let (w r , w r+1 , . . . , w s ) denote the relation descriptor, that is, y r = B-REL and y t = I-REL where r + 1 \u2264 t \u2264 s. The long-range features we use are categorized and summarized in Table 3 . These features capture the context of the entire relation descriptor, its relation to the two arguments, and whether the boundary of the relation descriptor conforms to the phrase boundaries (since we expect that most relation descriptors consist of a single or a sequence of phrases).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 467,
                        "end": 474,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Long-range features",
                "sec_num": null
            },
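Below is a minimal sketch of the long-range feature categories in Table 3, assuming the candidate descriptor span indices r..s and the positions of the two argument tokens are known for the labeling being scored; the helper name and feature-string format are illustrative only.

```python
def long_range_features(tokens, pos_tags, phrase_tags, r, s, arg1_idx, arg2_idx):
    """Features over a whole candidate labeling whose descriptor is tokens[r..s]."""
    feats = []
    # Contextual features: word / POS tag immediately before and after REL.
    if r > 0:
        feats.append(f"w_before_REL={tokens[r - 1]}")
        feats.append(f"t_before_REL={pos_tags[r - 1]}")
    if s + 1 < len(tokens):
        feats.append(f"w_after_REL={tokens[s + 1]}")
        feats.append(f"t_after_REL={pos_tags[s + 1]}")
    # Path-based features: token sequences between each argument and REL.
    if arg1_idx < r:
        feats.append("ARG1_to_REL=" + " ".join(tokens[arg1_idx + 1:r]))
    if s < arg2_idx:
        feats.append("REL_to_ARG2=" + " ".join(tokens[s + 1:arg2_idx]))
    # Phrase boundary feature: does the descriptor cut into a phrase at either end?
    violates = (phrase_tags[r].startswith("I-")
                or (s + 1 < len(phrase_tags) and phrase_tags[s + 1].startswith("I-")))
    feats.append(f"violates_phrase_boundary={int(violates)}")
    return feats
```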
            {
                "text": "Since the task of extracting relation descriptors is new, we are not aware of any data set that can be directly used to evaluate our methods. We therefore annotated two data sets for evaluation, one for the general \"employment\" relation and the other for the general \"personal/social\" relation. 2 The first data set contains 150 business articles from New York Times. The articles were crawled from the NYT website between November 2009 Description Feature Template Example single token wi+j (\u22122 \u2264 j \u2264 2) wi+1 (next token) is president single POS tag ti+j (\u22122 \u2264 j \u2264 2) ti (current POS tag) is DET single phrase tag pi+j (\u22122 \u2264 j \u2264 2) pi\u22121 (previous phrase boundary tag) is I-NP two consecutive tokens wi+j\u22121&wi+j (\u22121 \u2264 j \u2264 2) wi is the and wi+1 is president two consecutive POS tags ti+j\u22121&ti+j (\u22121 \u2264 j \u2264 2) ti is DET and ti+1 is N two consecutive phrase tags pi+j\u22121&pi+j (\u22121 \u2264 j \u2264 2) pi is B-NP and pi+1 is I-NP Table 2 : Linear-chain feature templates. Each feature is defined with respect to a particular (current) position in the sequence. i indicates the current position and j indicates the position relative to the current position. All features are defined using observations within a window size of 5 of the current position.",
                "cite_spans": [
                    {
                        "start": 295,
                        "end": 296,
                        "text": "2",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 912,
                        "end": 919,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Data Preparation",
                "sec_num": "5.1"
            },
            {
                "text": "Contextual Features word wr\u22121 or POS tag tr\u22121 preceding relation descriptor , REL word ws+1 or POS tag ts+1 following relation descriptor REL PREP",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "Path-based Features word or POS tag sequence between ARG1 and relation descriptor ARG1 is REL word or POS tag sequence between ARG2 and relation descriptor REL PREP ARG2 word or POS tag sequence containing ARG1, ARG2 and relation descriptor ARG2 's REL , ARG1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "Phrase Boundary whether relation descriptor violates phrase boundaries 1 or 0 Feature Table 3 : Long-range feature templates. r and s are the indices of the first word and the last word of the relation descriptor, respectively. and January 2010. After sentence segmentation and tokenization, we used the Stanford NER tagger (Finkel et al., 2005) to identify PER and ORG named entities from each sentence. For named entities that contain multiple tokens we concatenated them into a single token. We then took each pair of (PER, ORG) entities that occur in the same sentence as a single candidate relation instance, where the PER entity is treated as ARG-1 and the ORG entity is treated as ARG-2.",
                "cite_spans": [
                    {
                        "start": 324,
                        "end": 345,
                        "text": "(Finkel et al., 2005)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 86,
                        "end": 93,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "The second data set comes from a Wikipedia personal/social relation data set previously used in (Culotta et al., 2006) . The original data set does not contain annotations of relation descriptors such as \"sister\" or \"friend\" between the two PER arguments. We therefore also manually annotated this data set. Similarly, we performed sentence segmentation, tokenization and NER tagging, and took each pair of (PER, PER) entities occurring in the same sentence as a candidate relation instance. Because both arguments involved in the \"personal/social\" relation are PER entities, we always treat the first PER entity as ARG-1 and the second PER entity as ARG-2. 3 We go through each candidate relation instance to find whether there is an explicit sequence of words describing the relation between ARG-1 and ARG-2, and label the sequence of words, if any. Note that we only consider explicitly stated relation descriptors. If we cannot find such a relation descriptor, even if ARG-1 and ARG-2 actually have some kind of relation, we still label the instance as Nil. For example, in the instance \"he is the son of ARG1 and ARG2\", although we can infer that ARG-1 and ARG-2 have some family relation, we regard this as a negative instance.",
                "cite_spans": [
                    {
                        "start": 96,
                        "end": 118,
                        "text": "(Culotta et al., 2006)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 658,
                        "end": 659,
                        "text": "3",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "A relation descriptor may also contain multiple relations. For example, in the instance \"ARG1 is the CEO and president of ARG2\", we label \"the CEO and president\" as the relation descriptor, which actually contains two job titles, namely, CEO and president.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "Note that our annotated relation descriptors are not always nouns or noun phrases. An example is the third instance for personal/social relation in Table 1 , where the relation descriptor \"married\" is a verb and indicates a spouse relation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 148,
                        "end": 155,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "The total number of relation instances, the number of positive and negative instances as well as the number of distinct relation descriptors in each data set are summarized in Table 4 . ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 176,
                        "end": 183,
                        "text": "Table 4",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Category Feature Template Description Example",
                "sec_num": null
            },
            {
                "text": "We compare the following methods in our experiments:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "\u2022 LC-CRF: This is the standard linear-chain CRF model with features described in Table 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 81,
                        "end": 88,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "\u2022 M-CRF-1: This is our modified linear-chain CRF model with the space of label sequences reduced but with features fixed to the same as those used in LC-CRF.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "\u2022 M-CRF-2: This is M-CRF-1 with the addition of the contextual long-range features described in Table 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 96,
                        "end": 103,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "\u2022 M-CRF-3: This is M-CRF-2 with the addition of the path-based long-range features described in Table 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 96,
                        "end": 103,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "\u2022 M-CRF-4: This is M-CRF-3 with the addition of the phrase boundary long-range feature described in Table 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 100,
                        "end": 107,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "For the standard linear-chain CRF model, we use the package CRF++ 4 . We implement our own version of the modified linear-chain CRF models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "We perform 10-fold cross validation for all our experiments. For each data set we first randomly divide it into 10 subsets. Each time we take 9 subsets for training and the remaining subset for testing. We report the average performance across the 10 runs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "Based on our preliminary experiments, we have found that using a smaller set of general POS tags instead of the Penn Treebank POS tag set could slightly improve the overall performance. We therefore only report the performance obtained using our POS tags. For example, we group NN, NNP, NNS and NNPS of the Penn Treebank set under a general tag N.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
            {
                "text": "We evaluate the performance using two different criteria: overlap match and exact match. Overlap match is a more relaxed criterion: if the extracted relation descriptor overlaps with the true relation descriptor (i.e., having at least one token in common), it is considered correct. Exact match is a much stricter criterion: it requires that the extracted relation descriptor be exactly the same as the true relation descriptor in order to be considered correct. Given these two criteria, we can define accuracy, precision, recall and F1 measures. Accuracy is the percentage of candidate relation instances whose label sequence is considered correct. Both positive and negative instances are counted when computing accuracy. Because our data sets are quite balanced, it is reasonable to use accuracy. Precision, recall and F1 are defined in the usual way at the relation instance level.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },
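            {
                "text": "The two criteria can be stated precisely by treating a relation descriptor as the set of token positions it covers in the candidate instance; the sketch below is one possible encoding, not our exact implementation:\n\ndef exact_match(predicted, gold):\n    # Correct only if the extracted span is identical to the true descriptor.\n    return set(predicted) == set(gold)\n\ndef overlap_match(predicted, gold):\n    # Correct if the extracted and true descriptors share at least one token.\n    return len(set(predicted) & set(gold)) > 0\n\n# Example: gold descriptor covers tokens 3-5, prediction covers tokens 5-6.\n# exact_match({5, 6}, {3, 4, 5})    -> False\n# overlap_match({5, 6}, {3, 4, 5})  -> True",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment Setup",
                "sec_num": "5.2"
            },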
            {
                "text": "In Table 5 , we summarize the performance in terms of the various measures on the two data sets. For both the baseline linear-chain CRF model and our modified linear-chain CRF models, we have tuned the regularization parameters and show only the results using the optimal parameter values for each data set, chosen from \u03b2 = 10 \u03b3 for \u03b3 \u2208 [\u22123, \u22122, . . . , 2, 3].",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Method Comparison",
                "sec_num": "5.3"
            },
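            {
                "text": "This parameter search amounts to a small grid over \u03b2 = 10^\u03b3, sketched below (train_and_evaluate is a placeholder for whichever CRF implementation is being tuned, and the held-out split is illustrative):\n\n# Candidate regularization strengths: 0.001, 0.01, ..., 100, 1000.\nbetas = [10.0 ** gamma for gamma in range(-3, 4)]\n# best_beta = max(betas, key=lambda b: train_and_evaluate(train_set, dev_set, beta=b))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method Comparison",
                "sec_num": "5.3"
            },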
            {
                "text": "First, we can see from the table that by reducing the label sequence space, M-CRF-1 can significantly outperform the baseline LC-CRF in terms of F1 in all cases. In terms of accuracy, there is significant improvement for the NYT data set but not for the Wikipedia data set. We also notice that for both data sets the advantage of M-CRF-1 is mostly evident in the improvement of recall. This shows that a larger number of true relation descriptors are extracted when the label sequence space is reduced.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method Comparison",
                "sec_num": "5.3"
            },
            {
                "text": "Next we see from the table that long-range features are also useful, and the improvement comes mostly from the path-based long-range features. In terms of both accuracy and F1, M-CRF-3 can significantly outperform M-CRF-1 in all settings. In this case, the improvement is a mixture of both precision and recall. This shows that by explicitly capturing the patterns between the two arguments and the relation descriptor, we can largely improve the extraction performance. On the other hand, neither the contextual long-range features nor the phrase boundary long-range features exhibit any Table 5 : Comparison of different methods on the New York Times data set and Wikipedia data set. Accu., Prec., Rec. and F1 stand for accuracy, precision, recall and F1 measures, respectively. \u2020 indicates that the current value is statistically significantly better than the value in the previous row at a 0.95 level of confidence by one-tailed paired T-test. significant impact. We hypothesize the following. For contextual long-range features, they have already been captured in the linear-chain features. For example, the long-range feature \"is REL\" is similar to the linear-chain feature \"w i\u22121 = is & y i = B-R\". For the phrase boundary long-range feature, since phrase boundary tags have also been used in the linear-chain features, this feature does not provide additional information. In addition, we have found that a large percentage of relation descriptors violate phrase boundaries: 22% in the NYT data set, and 29% in the Wikipedia data set. Therefore, it seems that phrase boundary information is not important for relation descriptor extraction. Overall, performance is much higher on the NYT data set than on the Wikipedia data set. Based on our observations during annotation, this is due to the fact that the \"employment\" relations expressed in the NYT data set often follow some standard patterns, whereas in Wikipedia the \"personal/social\" relations can be expressed in more varied ways. The lower performance achieved on the Wikipedia data set suggests that extracting relation descriptors is not an easy task even under a supervised learning setting.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 589,
                        "end": 596,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Method Comparison",
                "sec_num": "5.3"
            },
            {
                "text": "Presumably relation descriptors that are not seen in the training data are harder to extract. We would therefore also like to see how well our model works on such unseen relation descriptors. We find that with 10-fold cross validation, for the NYT data set, on average our model is able to extract approximately 67% of the unseen relation descriptors in the test data using exact match criteria. For the Wikipedia data set this percentage is approximately 27%. Both numbers are lower than the overall recall values the model can achieve on the entire test data, showing that unseen relation descriptors are indeed harder to extract. However, our model is still able to pick up new relation descriptors.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method Comparison",
                "sec_num": "5.3"
            },
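            {
                "text": "The recall on unseen descriptors can be computed by restricting the evaluation to test instances whose gold descriptor string never appears in the training folds; a minimal sketch (the data structures below are illustrative assumptions, not our actual code):\n\ndef unseen_recall(train_descriptors, test_gold, test_pred):\n    # train_descriptors: set of descriptor strings observed in the training folds\n    # test_gold / test_pred: parallel lists of descriptor strings (exact match)\n    unseen = [(g, p) for g, p in zip(test_gold, test_pred)\n              if g and g not in train_descriptors]\n    if not unseen:\n        return 0.0\n    return sum(g == p for g, p in unseen) / len(unseen)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method Comparison",
                "sec_num": "5.3"
            },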
            {
                "text": "In the previous experiments, we have used 90% of the data for training and the remaining 10% for testing. We now take a look at how the performance changes with different numbers of training instances. We vary the training data size from only a few instances (2, 5, and 10) to 20%, 40%, 60% and 80% of the entire data set. The results are shown in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 348,
                        "end": 356,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Effect of Training Data Size",
                "sec_num": "5.4"
            },
            {
                "text": "As we can expect, when the number of training instances is small, the performance on both data sets is low. The figure also shows that the Wikipedia data set is the more difficult than the NYT data set. This is consistent with our observation in the previous section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Effect of Training Data Size",
                "sec_num": "5.4"
            },
            {
                "text": "The modified linear-chain CRF model consistently outperforms the baseline linear-chain CRF model. For similar level of performance, the modified linear-chain CRF model requires less training data than the baseline linear-chain CRF model. For example, Figure 1(b) shows that the modified linear-chain CRF model achieve 0.72 F1 with about 215 training instances, while the baseline linear-chain CRF model requires about 480 training instances for a similar F1.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 251,
                        "end": 262,
                        "text": "Figure 1(b)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Effect of Training Data Size",
                "sec_num": "5.4"
            },
            {
                "text": "In this paper, we studied relation extraction under a new setting: the relation types are defined at a general level but more specific relation descriptors are desired. Based on the special properties of this new task, we found that standard linear-chain CRF models have some potential limitations for this task. We subsequently proposed some modifications to linear-chain CRFs in order to suit our task better. We annotated two data sets to evaluate our methods. The experiments showed that by restricting the space of possible label sequences and introducing certain long-range features, the performance of the modified linear-chain CRF model can perform significantly better than standard linear-chain CRFs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "Currently our work is only based on evaluation on two data sets and on two general relations. In the future we plan to evaluate the methods on other general relations to test its robustness. We also plan to explore how this new relation extraction task can be used within other NLP or text mining applications.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "Automatic Content Extraction http://www.itl. nist.gov/iad/mig/tests/ace/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://www.mysmu.edu/faculty/ jingjiang/data/IJCNLP2011.zip",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Since many personal/social relations are asymmetric, ideally we should assign ARG-1 and ARG-2 based on their semantic meanings rather than their positions. Here we take a simple approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://crfpp.sourceforge.net/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This material is based on research sponsored by the Air Force Research Laboratory, under agreement number FA2386-09-1-4123. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Snowball: Extracting relations from large plain-text collections",
                "authors": [
                    {
                        "first": "Eugene",
                        "middle": [],
                        "last": "Agichtein",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [],
                        "last": "Gravano",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the Fifth ACM Conference on Digital Libraries",
                "volume": "",
                "issue": "",
                "pages": "85--94",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snow- ball: Extracting relations from large plain-text col- lections. In Proceedings of the Fifth ACM Confer- ence on Digital Libraries, pages 85-94, June.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The tradeoffs between open and traditional relation extraction",
                "authors": [
                    {
                        "first": "Michele",
                        "middle": [],
                        "last": "Banko",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Etzioni",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "28--36",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Proceedings of the 46th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 28-36.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A shortest path dependency kernel for relation extraction",
                "authors": [
                    {
                        "first": "Razvan",
                        "middle": [],
                        "last": "Bunescu",
                        "suffix": ""
                    },
                    {
                        "first": "Raymond",
                        "middle": [],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "724--731",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extrac- tion. In Proceedings of the Human Language Tech- nology Conference and the Conference on Empiri- cal Methods in Natural Language Processing, pages 724-731, October.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Integrating probabilistic extraction models and data mining to discover relations and patterns in text",
                "authors": [
                    {
                        "first": "Aron",
                        "middle": [],
                        "last": "Culotta",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    },
                    {
                        "first": "Jonathan",
                        "middle": [],
                        "last": "Betz",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "296--303",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aron Culotta, Andrew McCallum, and Jonathan Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Proceedings of the Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics, pages 296-303, June.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Incorporating non-local information into information extraction systems by gibbs sampling",
                "authors": [
                    {
                        "first": "Jenny",
                        "middle": [
                            "Rose"
                        ],
                        "last": "Finkel",
                        "suffix": ""
                    },
                    {
                        "first": "Trond",
                        "middle": [],
                        "last": "Grenager",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [
                            "D"
                        ],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "363--370",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christo- pher D. Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Lin- guistics, pages 363-370, June.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Discovering relations among named entities from large corpora",
                "authors": [
                    {
                        "first": "Takaaki",
                        "middle": [],
                        "last": "Hasegawa",
                        "suffix": ""
                    },
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Sekine",
                        "suffix": ""
                    },
                    {
                        "first": "Ralph",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "415--422",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Takaaki Hasegawa, Satoshi Sekine, and Ralph Grish- man. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 415-422, July.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Learning 5000 relational extractors",
                "authors": [
                    {
                        "first": "Raphael",
                        "middle": [],
                        "last": "Hoffmann",
                        "suffix": ""
                    },
                    {
                        "first": "Congle",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [
                            "S"
                        ],
                        "last": "Weld",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "286--295",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of the 48th Annual Meeting of the As- sociation for Computational Linguistics, pages 286- 295, July.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
                "authors": [
                    {
                        "first": "John",
                        "middle": [
                            "D"
                        ],
                        "last": "Lafferty",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    },
                    {
                        "first": "Fernando",
                        "middle": [
                            "C N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of the 18th International Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "282--289",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning, pages 282- 289, June.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Text chunking using transformation-based learning",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Lance",
                        "suffix": ""
                    },
                    {
                        "first": "Mitchell",
                        "middle": [
                            "P"
                        ],
                        "last": "Ramshaw",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Marcus",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the Third ACL Workshop on Very Large Corpora",
                "volume": "",
                "issue": "",
                "pages": "82--94",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the Third ACL Workshop on Very Large Corpora, pages 82-94.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "URES : An unsupervised Web relation extraction system",
                "authors": [
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Rosenfeld",
                        "suffix": ""
                    },
                    {
                        "first": "Ronen",
                        "middle": [],
                        "last": "Feldman",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "667--674",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Benjamin Rosenfeld and Ronen Feldman. 2006. URES : An unsupervised Web relation extraction system. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computa- tional Linguistics, pages 667-674, July.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Semi-Markov conditional random fields for information extraction",
                "authors": [
                    {
                        "first": "Sunita",
                        "middle": [],
                        "last": "Sarawagi",
                        "suffix": ""
                    },
                    {
                        "first": "William",
                        "middle": [
                            "W"
                        ],
                        "last": "Cohen",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "17",
                "issue": "",
                "pages": "1185--1192",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sunita Sarawagi and William W. Cohen. 2005. Semi- Markov conditional random fields for information extraction. In Advances in Neural Information Pro- cessing Systems 17, pages 1185-1192.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Preemptive information extraction using unrestricted relation discovery",
                "authors": [
                    {
                        "first": "Yusuke",
                        "middle": [],
                        "last": "Shinyama",
                        "suffix": ""
                    },
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Sekine",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "304--311",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemp- tive information extraction using unrestricted rela- tion discovery. In Proceedings of the Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics, pages 304-311, June.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Open information extraction using Wikipedia",
                "authors": [
                    {
                        "first": "Fei",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Daniel",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Weld",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "118--127",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pages 118-127, July.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Kernel methods for relation extraction",
                "authors": [
                    {
                        "first": "Dmitry",
                        "middle": [],
                        "last": "Zelenko",
                        "suffix": ""
                    },
                    {
                        "first": "Chinatsu",
                        "middle": [],
                        "last": "Aone",
                        "suffix": ""
                    },
                    {
                        "first": "Anthony",
                        "middle": [],
                        "last": "Richardella",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Journal of Machine Learning Research",
                "volume": "3",
                "issue": "",
                "pages": "1083--1106",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. Journal of Machine Learning Research, 3:1083-1106.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Exploring syntactic features for relation extraction using a convolution tree kernel",
                "authors": [
                    {
                        "first": "Min",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Jie",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Jian",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "288--295",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Min Zhang, Jie Zhang, and Jian Su. 2006. Explor- ing syntactic features for relation extraction using a convolution tree kernel. In Proceedings of the Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 288-295, June.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Exploring various knowledge in relation extraction",
                "authors": [
                    {
                        "first": "Guodong",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Jian",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    },
                    {
                        "first": "Jie",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Min",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "427--434",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation ex- traction. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguis- tics, pages 427-434, June.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "text": "Performance of LC-CRF and M-CRF-3 as the training data size increases.",
                "num": null,
                "type_str": "figure"
            },
            "TABREF0": {
                "text": "ARG-1 , a vice president at ARG-2 , which ... a",
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td>Relation</td><td>Candidate Relation Instance</td><td>Relation Descriptor</td></tr><tr><td>Employment</td><td>... said</td><td/></tr></table>",
                "num": null
            },
            "TABREF2": {
                "text": "Number of instances in each data set. Positive instances are those that have an explicit relation descriptor. The last column shows the number of distinct relation descriptors.",
                "html": null,
                "type_str": "table",
                "content": "<table/>",
                "num": null
            }
        }
    }
}