{
    "paper_id": "I08-1005",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:41:06.664311Z"
    },
    "title": "Semi-Supervised Learning for Relation Extraction",
    "authors": [
        {
            "first": "Guodong",
            "middle": [],
            "last": "Zhou",
            "suffix": "",
            "affiliation": {
                "laboratory": "Jiangsu Provincial Key Lab for Computer Information Processing Technology",
                "institution": "Soochow Univ",
                "location": {
                    "postCode": "215006",
                    "settlement": "Suzhou",
                    "country": "China"
                }
            },
            "email": "gdzhou@suda.edu.cn"
        },
        {
            "first": "Junhui",
            "middle": [],
            "last": "Li",
            "suffix": "",
            "affiliation": {
                "laboratory": "Jiangsu Provincial Key Lab for Computer Information Processing Technology",
                "institution": "Soochow Univ",
                "location": {
                    "postCode": "215006",
                    "settlement": "Suzhou",
                    "country": "China"
                }
            },
            "email": "lijunhui@suda.edu.cn"
        },
        {
            "first": "Longhua",
            "middle": [],
            "last": "Qian",
            "suffix": "",
            "affiliation": {
                "laboratory": "Jiangsu Provincial Key Lab for Computer Information Processing Technology",
                "institution": "Soochow Univ",
                "location": {
                    "postCode": "215006",
                    "settlement": "Suzhou",
                    "country": "China"
                }
            },
            "email": "qianlonghua@suda.edu.cn"
        },
        {
            "first": "Qiaoming",
            "middle": [],
            "last": "Zhu",
            "suffix": "",
            "affiliation": {
                "laboratory": "Jiangsu Provincial Key Lab for Computer Information Processing Technology",
                "institution": "Soochow Univ",
                "location": {
                    "postCode": "215006",
                    "settlement": "Suzhou",
                    "country": "China"
                }
            },
            "email": "qmzhu@suda.edu.cn"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper proposes a semi-supervised learning method for relation extraction. Given a small amount of labeled data and a large amount of unlabeled data, it first bootstraps a moderate number of weighted support vectors via SVM through a co-training procedure with random feature projection and then applies a label propagation (LP) algorithm via the bootstrapped support vectors. Evaluation on the ACE RDC 2003 corpus shows that our method outperforms the normal LP algorithm via all the available labeled data without SVM bootstrapping. Moreover, our method can largely reduce the computational burden. This suggests that our proposed method can integrate the advantages of both SVM bootstrapping and label propagation.",
    "pdf_parse": {
        "paper_id": "I08-1005",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper proposes a semi-supervised learning method for relation extraction. Given a small amount of labeled data and a large amount of unlabeled data, it first bootstraps a moderate number of weighted support vectors via SVM through a co-training procedure with random feature projection and then applies a label propagation (LP) algorithm via the bootstrapped support vectors. Evaluation on the ACE RDC 2003 corpus shows that our method outperforms the normal LP algorithm via all the available labeled data without SVM bootstrapping. Moreover, our method can largely reduce the computational burden. This suggests that our proposed method can integrate the advantages of both SVM bootstrapping and label propagation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Relation extraction is to detect and classify various predefined semantic relations between two entities from text and can be very useful in many NLP applications such as question answering, e.g. to answer the query \"Who is the president of the United States?\", and information retrieval, e.g. to expand the query \"George W. Bush\" with \"the president of the United States\" via his relationship with \"the United States\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "During the last decade, many methods have been proposed in relation extraction, such as supervised learning (Miller et al 2000; Zelenko et al 2003; Culota and Sorensen 2004; Zhao and Grishman 2005; Zhang et al 2006; Zhou et al 2005 Zhou et al , 2006 , semi-supervised learning (Brin 1998; Agichtein and Gravano 2000; Zhang 2004; Chen et al 2006) , and unsupervised learning (Hasegawa et al 2004; Zhang et al 2005) . Among these methods, supervised learning-based methods perform much better than the other two alternatives. However, their performance much depends on the availability of a large amount of manually labeled data and it is normally difficult to adapt an existing system to other applications and domains. On the other hand, unsupervised learning-based methods do not need the definition of relation types and the availability of manually labeled data. However, they fail to classify exact relation types between two entities and their performance is normally very low. To achieve better portability and balance between human efforts and performance, semi-supervised learning has drawn more and more attention recently in relation extraction and other NLP applications.",
                "cite_spans": [
                    {
                        "start": 108,
                        "end": 127,
                        "text": "(Miller et al 2000;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 128,
                        "end": 147,
                        "text": "Zelenko et al 2003;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 148,
                        "end": 173,
                        "text": "Culota and Sorensen 2004;",
                        "ref_id": null
                    },
                    {
                        "start": 174,
                        "end": 197,
                        "text": "Zhao and Grishman 2005;",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 198,
                        "end": 215,
                        "text": "Zhang et al 2006;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 216,
                        "end": 231,
                        "text": "Zhou et al 2005",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 232,
                        "end": 249,
                        "text": "Zhou et al , 2006",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 277,
                        "end": 288,
                        "text": "(Brin 1998;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 289,
                        "end": 316,
                        "text": "Agichtein and Gravano 2000;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 317,
                        "end": 328,
                        "text": "Zhang 2004;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 329,
                        "end": 345,
                        "text": "Chen et al 2006)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 374,
                        "end": 395,
                        "text": "(Hasegawa et al 2004;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 396,
                        "end": 413,
                        "text": "Zhang et al 2005)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This paper proposes a semi-supervised learning method for relation extraction. G iven a small amount of labeled data and a large amount of unlabeled data, our proposed method first bootstraps a moderate number of weighted support vectors from all the available data via SVM using a co-training procedure with random feature projection and then applies a label propagation (LP) algorithm to capture the manifold structure in both the labeled and unlabeled data via the bootstrapped support vectors. Compared with previous methods, our method can integrate the advantages of both SVM bootstrapping in learning critical instances for the labeling function and label propagation in capturing the manifold structure in both the labeled and unlabeled data to smooth the labeling function.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The rest of this paper is as follows. In Section 2, we review related semi-supervised learning work in relation extraction. Then, the LP algorithm via bootstrapped support vectors is proposed in Section 3 while Section 4 shows the experimental results. Finally, we conclude our work in Section 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Generally, supervised learning is preferable to unsupervised learning due to prior knowledge in the annotated training data and better performance. However, the annotated data is usually expensive to obtain. Hence, there has been growing interest in semi-supervised learning, aiming at inducing classifiers by leveraging a small amount of labeled data and a large amount of unlabeled data. Related work in relation extraction using semi-supervised learning can be classified into two categories: bootstrapping-based (Brin 1998; Agichtein and Gravano 2000; Zhang 2004 ) and label propagation(LP)-based (Chen et al 2006) .",
                "cite_spans": [
                    {
                        "start": 516,
                        "end": 527,
                        "text": "(Brin 1998;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 528,
                        "end": 555,
                        "text": "Agichtein and Gravano 2000;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 556,
                        "end": 566,
                        "text": "Zhang 2004",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 601,
                        "end": 618,
                        "text": "(Chen et al 2006)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Currently, bootstrapping-based methods dominate semi-supervised learning in relation extraction. Bootstrapping works by iteratively classifying unlabeled instances and adding confidently classified ones into labeled data using a model learned from augmented labeled data in previous iteration. Brin (1998) proposed a bootstrapping-based method on the top of a self-developed pattern matching-based classifier to exploit the duality between patterns and relations. Agichtein and Gravano (2000) shared much in common with Brin (1998) . They employed an existing pattern matching-based classifier (i.e. SNoW) instead. Zhang (2004) approached the much simpler relation classification sub-task by bootstrapping on the top of SVM. Although bootstrapping-based methods have achieved certain success, one problem is that they may not be able to well capture the manifold structure among unlabeled data.",
                "cite_spans": [
                    {
                        "start": 294,
                        "end": 305,
                        "text": "Brin (1998)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 464,
                        "end": 492,
                        "text": "Agichtein and Gravano (2000)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 520,
                        "end": 531,
                        "text": "Brin (1998)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 615,
                        "end": 627,
                        "text": "Zhang (2004)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "As an alternative to the bootstrapping-based methods, Chen et al (2006) employed a LP-based method in relation extraction. Compared with bootstrapping, the LP algorithm can effectively combine labeled data with unlabeled data in the learning process by exploiting the manifold structure (e.g. the natural clustering structure) in both the labeled and unlabeled data. The rationale behind this algorithm is that the instances in highdensity areas tend to carry the same labels. The LP algorithm has also been successfully applied in other NLP applications, such as word sense disambiguation (Niu et al 2005) , text classification (Szummer and Jaakkola 2001; Blum and Chawla 2001; Belkin and Niyogi 2002; Zhu and Ghahramani 2002; Zhu et al 2003; Blum et al 2004) , and information retrieval (Yang et al 2006) . However, one problem is its computational burden, especially when a large amount of labeled and unlabeled data is taken into consideration.",
                "cite_spans": [
                    {
                        "start": 54,
                        "end": 71,
                        "text": "Chen et al (2006)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 590,
                        "end": 606,
                        "text": "(Niu et al 2005)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 629,
                        "end": 656,
                        "text": "(Szummer and Jaakkola 2001;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 657,
                        "end": 678,
                        "text": "Blum and Chawla 2001;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 679,
                        "end": 702,
                        "text": "Belkin and Niyogi 2002;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 703,
                        "end": 727,
                        "text": "Zhu and Ghahramani 2002;",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 728,
                        "end": 743,
                        "text": "Zhu et al 2003;",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 744,
                        "end": 760,
                        "text": "Blum et al 2004)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 789,
                        "end": 806,
                        "text": "(Yang et al 2006)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "In order to take the advantages of both bootstrapping and label propagation, our proposed method propagates labels via bootstrapped support vectors. On the one hand, our method can well capture the manifold structure in both the labeled and unlabeled data. On the other hand, our method can largely reduce the computational burden in the normal LP algorithm via all the available data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The idea behind our LP algorithm via bootstrapped support vectors is that, instead of propagating labels through all the available labeled data, our method propagates labels through critical instances in both the labeled and unlabeled data. In this paper, we use SVM as the underlying classifier to bootstrap a moderate number of weighted support vectors for this purpose. This is based on an assumption that the manifold structure in both the labeled and unlabeled data can be well preserved through the critical i nstances (i.e. the weighted support vectors bootstrapped from all the available labeled and unlabeled data). The reason why we choose SVM is that it represents the state-of-theart in machine learning research and there are good implementations of the algorithm available. In particular, SVMLight (Joachims 1998) is selected as our classifier. For efficiency, we apply the one vs. others strategy, which builds K classifiers so as to separate one class from all others. Another reason is that we can adopt the weighted support vectors returned by the bootstrapped SVMs as the critical instances, via which label propagation is done.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label ropagation via Bootstrapped Support Vectors",
                "sec_num": "3"
            },
            {
                "text": "This paper modifies the SVM bootstrapping algorithm BootProject (Zhang 2004) to bootstrap support vectors. Given a small amount of labeled data and a large amount of unlabeled data, the modified BootProject algorithm bootstraps on the top of SVM by iteratively classifying unlabeled instances and moving confidently classified ones into labeled data using a model learned from the augmented labeled data in previous iteration, until not enough unlabeled instances can be classified confidently. Figure 1 shows the modified BootProject algorithm for bootstrapping support vectors.",
                "cite_spans": [
                    {
                        "start": 64,
                        "end": 76,
                        "text": "(Zhang 2004)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 495,
                        "end": 503,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Bootstrapping Support Vectors",
                "sec_num": "3.1"
            },
            {
                "text": "L : the labeled data; U : the unlabeled data; S : the batch size (100 in our experiments); P : the number of views(feature projections); r : the number of classes (including all the relation (sub)types and the non-relation) BEGIN REPEAT FOR i = 1 to P DO Generate projected feature space i F from the original feature space F ; Project both L and U onto i F , thus gener-",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "ate i L and i U ; Train SVM classifier ij SVM on i L for each class ) 1 ( r j r j K = ; Run ij SVM on i U for each class ) 1 ( r j r j K = END FOR",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "Find (at most) S instances in U with the highest agreement (with threshold 70% in our experiments) and the highest average SVM-returned confidence value (with threshold 1.0 in our experiments); Move them from U to L; UNTIL not enough unlabeled instances (less than 10 in our experiments) can be confidently classified;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "Return all the (positive and negative) support vectors included in all the latest SVM classifiers ij SVM with their collective weight (absolute alpha*y) information as the set of bootstrapped support vectors to act as the labeled data in the LP algorithm;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "Return U (those hard cases which can not be confidently classified) to act as the unlabeled data in the LP algorithm;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "END _________________________________________ Figure 1 : The algorithm for bootstrapping support vectors In particular, this algorithm generates multiple overlapping \"views\" by projecting from the original feature space. In this paper, feature views with random feature projection, as proposed in Zhang (2004) , are explored. Section 4 will discuss this issue in more details. During the iterative training process, classifiers trained on the augmented labeled data using the projected views are then asked to vote on the remaining unlabeled instances and those with the highest probability of being correctly labeled are chosen to augment the labeled data.",
                "cite_spans": [
                    {
                        "start": 297,
                        "end": 309,
                        "text": "Zhang (2004)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 46,
                        "end": 54,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "During the bootstrapping process, the support vectors included in all the trained SVM classifiers (for all the relation (sub)types and the non-relation) are bootstrapped (i.e. updated) at each iteration. When the bootstrapping process stops, all the (positive and negative) support vectors included in the SVM classifiers are returned as bootstrapped support vectors with their collective weights (absolute a*y) to act as the labeled data in the LP algorithm and all the remaining unlabeled instances (i.e. those hard cases which can not be confidently classified in the bootstrapping process) in the unlabeled data are returned to act as the unlabeled data in the LP algorithm. Through SVM bootstrapping, our LP algorithm will only depend on the critical instances (i.e. support vectors with their weight information bootstrapped from all the available labeled and unlabeled data) and those hard i nstances, instead of all the available labeled and unlabeled data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_________________________________________ Assume:",
                "sec_num": null
            },
            {
                "text": "In the LP algorithm (Zhu and Ghahramani 2002) , the manifold structure in data is represented as a connected graph. Given the labeled data (the above bootstrapped support vectors with their weights) and unlabeled data (the remaining hard instances in the unlabeled data after bootstrapping, including all the test instances for evaluation), the LP algorithm first represents labeled and unlabeled instances as vertices in a connected graph, then propagates the label information from any vertex to nearby vertex through weighted edges and finally infers the labels of unlabeled instances until a global stable stage is achieved. Figure 2 presents the label propagation algorithm on bootstrapped support vectors in details. Clamp the labeled data, that is, replace",
                "cite_spans": [
                    {
                        "start": 20,
                        "end": 45,
                        "text": "(Zhu and Ghahramani 2002)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 629,
                        "end": 637,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Label Propagation",
                "sec_num": "3.2"
            },
            {
                "text": "1 + t L Y with 0 L Y ; UNTIL Y converges(e.g. 1 + t L Y converges to 0 L Y );",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label Propagation",
                "sec_num": "3.2"
            },
            {
                "text": "Assign each unlabeled instance with a label: for During the label propagation process, the label distribution of the labeled data is clamped in each loop using the weights of the bootstrapped support vectors and acts like forces to push out labels through the unlabeled data. With this push originates from the labeled data, the label boundaries will be pushed much faster along edges with larger weights and settle in gaps along those with lower weights. Ideally, we can expect that ij w across different classes should be as small as possible and ij w within the same class as big as possible. In this way, label propagation happens within the same class most likely. This algorithm has been shown to converge to a unique solution (Zhu and Ghahramani 2002) , which can be obtained without iteration in theory, and the initialization of Y U 0 (the unlabeled data) is not important since Y U 0 does not affect its estimation. However, proper initialization of Y U 0 actually helps the algorithm converge more rapidly in practice. In this paper, each row in Y U 0 is initialized to the average similarity with the labeled instances. In all our experiments, we iterate over all pairs of entity mentions occurring in the same sentence to generate potential relation instances 1 . For better evaluation, we have adopted a state-of-the-art linear kernel as similarity measurements. In our linear kernel, we apply the same feature set as described in a state-of-the-art feature-based system : word, entity type, mention level, overlap, base phrase chunking, dependency tree, parse tree and semantic information. Given above various lexical, syntactic and semantic features, multiple overlapping feature views are generated in the bootstrapping process using random feature projection (Zhang 2004) . 
For each feature projection in bootstrapping support vectors, a feature is randomly selected with probability p and therefore the eventually projected feature space has p*F features on average, where F is the size of the original feature space. In this paper, p and the number of different views are fine-tuned to 0.5 and 10 2 respectively using 5-fold cross validation on the training data of the ACE RDC 2003 corpus. Table 1 presents the F-measures 3 (the numbers outside the parentheses) of our algorithm using the state-of-the-art linear kernel on different sizes of the ACE RDC training data with all the remaining training data and the test data 4 as the unlabeled data on the ACE RDC 2003 corpus. In this paper, we only report the performance (averaged over 5 trials) with the percentages of 5%, 10%, 25%, 50%, 75% and 100% 5 . For example, our LP algorithm via bootstrapped (weighted) support vectors achieves the F-measure of 46.5 if using only 5% of the ACE RDC 2003 training data as the labeled data and the remaining training data and the test data in this corpus as the unlabeled data. Table 1 also compares our method with SVM and the original SVM bootstrapping algorithm BootProject(i.e. bootstrapping on the top of SVM with feature projection, as proposed in Zhang (2004) ). Finally, Table 1 compares our LP algorithm via bootstrapped (weighted by default) support vectors with other possibilities, such as the scheme via bootstrapped (un-weighted, i.e. the importance of support vectors is not differentiated) support vectors and the scheme via all the available labeled data (i.e. without SVM bootstrapping). 
Table 1 shows that: 1) Inclusion of unlabeled data using semisupervised learning, including the SVM bootstrapping algorithm BootProject, the normal LP algorithm via all the available labeled and unlabeled data without SVM bootstrapping, and our LP algorithms via bootstrapped (either weighted or un-weighted) support vectors, consistently improves the performance, a lthough semi-supervised learning has shown to typically decrease the performance when a lot of (enough) labeled data is available (Nigam 2001) . This may be due to the insufficiency of labeled data in the ACE RDC 2003 corpus. Actually, most of relation subtypes in the two corpora much suffer from the data sparseness problem (Zhou et al 2006) . 2) All the three LP algorithms outperform the state-of-the-art SVM classifier and the SVM bootstrapping algorithm BootProject. Especially, when a small amount of labeled data is available, the performance improvements by the LP algorithms are significant. This indicates the usefulness of the manifold structure in both labeled and unlabeled data and the powerfulness of the LP algorithm in modeling such information. 3) Our LP algorithms via bootstrapped (either weighted or un-weighted) support vectors outperforms the normal LP algorithm via all the available labeled data w/o SVM bootstrapping. For example, o ur LP algorithm via bootstrapped (weighted) support vectors outperforms the normal LP algorithm from 0.6 to 3.4 in F-measure on the ACE RDC 2003 corpus respectively when the labeled data ranges from 100% to 5%. This suggests that the manifold structure in both the labeled and unlabeled data can be well preserved via bootstrapped support vectors, especially when only a small amount of labeled data is available. This implies that weighted support vectors may represent the manifold structure (e.g. 
the decision boundary from where label propagation is done) better than the full set of data -an interesting result worthy more quantitative and qualitative justification in the future work. 4) Our LP algorithms via bootstrapped (weighted) support vectors perform better than LP algorithms via bootstrapped (un-weighted) support vectors by ~1.0 in F-measure on average. This suggests that bootstrapped support vectors with their weights can better represent the manifold structure in all the available labeled and unlabeled data than bootstrapped support vectors without their weights. 5) Comparison of SVM, SVM bootstrapping and label propagation with bootstrapped (weighted) support vectors shows that both bootstrapping and label propagation contribute much to the performance improvement. Table 1 also shows the increases in F-measure (the numbers inside the parentheses) if we add all the instances in the ACE RDC 2004 6 corpus into the ACE RDC 2003 corpus in consideration as unlabeled data in all the four semi-supervised learning methods. It shows that adding more unlabeled data can consistently improve the performance. For example, compared with using only 5% of the ACE RDC 2003 training data as the labeled data and the remaining training data and the test data in this corpus as the unlabeled data, including the ACE RDC 2004 corpus as the unlabeled data increases the F-measures of 1.4 and 1.0 in our LP algorithm and the normal LP algorithm respectively. Table 1 shows that the contribution grows first when the labeled data begins to increase and reaches a maximum of ~2.0 in F-measure at a certain point.",
                "cite_spans": [
                    {
                        "start": 733,
                        "end": 758,
                        "text": "(Zhu and Ghahramani 2002)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 1778,
                        "end": 1790,
                        "text": "(Zhang 2004)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 3068,
                        "end": 3080,
                        "text": "Zhang (2004)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 3917,
                        "end": 3929,
                        "text": "(Nigam 2001)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 4113,
                        "end": 4130,
                        "text": "(Zhou et al 2006)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 2212,
                        "end": 2219,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 2892,
                        "end": 2899,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 3093,
                        "end": 3100,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 3420,
                        "end": 3427,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 6040,
                        "end": 6047,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 6718,
                        "end": 6725,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Label Propagation",
                "sec_num": "3.2"
            },
            {
                "text": ") ( n i l x i \u2264 p",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Label Propagation",
                "sec_num": "3.2"
            },
            {
                "text": "Finally, it is found in our experiments that critical and hard instances normally occupy only 15~20% (~18% on average) of all the available labeled and unlabeled data. This suggests that, through bootstrapped support vectors, our LP algo-rithm can largely reduce the computational burden since it only depends on the critical instances (i.e. bootstrapped support vectors with their weights) and those hard instances.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results",
                "sec_num": "4.2"
            },
            {
                "text": "This paper proposes a new effective and efficient semi-supervised learning method in relation extraction. First, a moderate number of weighted support vectors are bootstrapped from all the available labeled and unlabeled data via SVM through a co-training procedure with feature projection. Here, a random feature projection technique is used to generate multiple overlapping feature views in bootstrapping using a state-of-the-art linear kernel. Then, a LP algorithm is applied to propagate labels via the bootstrapped support vectors, which, together with those hard unlabeled instances and the test instances, are represented as vertices in a connected graph. During the classification process, the label information is propagated from any vertex to nearby vertex through weighted edges and finally the labels of unlabeled instances are inferred until a global stable stage is achieved. In this way, the manifold structure in both the labeled and unlabeled data can be well captured by label propagation via bootstrapped support vectors. Evaluation on the ACE RDC 2004 corpus suggests that our LP algorithm via bootstrapped support vectors can take the advantages of both SVM bootstrapping and label propagation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "For the f uture work, we will systematically evaluate our proposed method on more corpora and explore better metrics of measuring the similarity between two instances.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "In this paper, we only measure the performance of relation extraction on \"true\" mentions with \"true\" chaining of co-reference (i.e. as annotated by the corpus annotators) in the ACE corpora. We also explicitly model the argument order of the two mentions involved and only model explicit relations because of poor inter-annotator agreement in the annotation of implicit relations and their limited number.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This suggests that the modified BootProject algorithm in the bootstrapping phase outperforms the SelfBoot algorithm (with p=1.0 and m=1) which uses all the features as the only view. In the related NLP literature, co-training has also shown to typically outperform self-bootstrapping.3 Our experimentation also shows that most of performance improvement with either bootstrapping or label propagation comes from gain in recall. Due to space limitation, this p aper only reports the overall Fmeasure.4 In our label propagation algorithm via bootstrapped support vectors, the test data is only included in the second phase (i.e. the label propagation phase) and not used in the first phase (i.e. bootstrapping support vectors). This is to fairly compare different semisupervised learning methods.5 We have tried less percentage than 5%. However, our experiments show that using much less data will suffer from performance un-stability. Therefore, we only report the performance with percentage not less than 5%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Compared with the ACE RDC 2003 task, the ACE RDC 2004 task defines two more entity types, i.e. weapon and vehicle, much more entity subtypes, and different 7 relation types and 23 subtypes between 7 entity types. The ACE RDC 2004 corpus from LDC contains 451 documents and 5702 relation instances.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This research is supported by Project 60673041 under the National Natural Science Foundation of China and Project 2006AA01Z147 under the \"863\" National High-Tech Research and Development of China.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Automatic Content Extraction",
                "authors": [],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "ACE. (2000-2005). Automatic Content Extraction. http://www.ldc.upenn.edu/Projects/ACE/",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Snowball: Extracting relations from large plain-text collections",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Agichtein",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Gravano",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 5 th ACM International Conference on Digital Libraries",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Agichtein E. and Gravano L. (2000). Snowball: Extracting relations from large plain-text collec- tions. Proceedings of the 5 th ACM International Conference on Digital Libraries (ACMDL'2000).",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Using Manifold Structure for Partially Labeled Classification",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Belkin",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Niyogi",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "NIPS",
                "volume": "15",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Belkin, M. and Niyogi, P. (2002). Using Manifold Structure for Partially Labeled Classification. NIPS 15.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Learning from labeled and unlabeled data using graph mincuts",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Blum",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Chawla",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Blum A. and Chawla S. (2001). Learning from la- beled and unlabeled data using graph mincuts. ICML'2001.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Semi-supervised learning using randomized mincuts",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Blum",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lafferty",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Reddy",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Blum A., Lafferty J., Rwebangira R and Reddy R. (2004). Semi-supervised learning using random- ized mincuts. ICML'2004.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Extracting patterns and relations from world wide web",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Brin",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of WebDB Workshop at 6 th International Conference on Extending Database Technology",
                "volume": "",
                "issue": "",
                "pages": "172--183",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brin S. (1998). Extracting patterns and relations from world wide web. Proceedings of WebDB Workshop at 6 th International Conference on Extending Database Technology:172-183.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Immediate-head Parsing for Language Models",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Charniak",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "129--137",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Charniak E. (2001). Immediate-head Parsing for Language Models. ACL'2001: 129-137. Tou- louse, France",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Relation extraction using label propagation based semi-supervised learning",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "X"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "H"
                        ],
                        "last": "Ji",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "L"
                        ],
                        "last": "Tan",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [
                            "Y"
                        ],
                        "last": "Niu",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "COLING-ACL",
                "volume": "",
                "issue": "",
                "pages": "129--136",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen J.X., Ji D.H., Tan C.L. and Niu Z.Y. (2006). Relation extraction using label propagation based semi-supervised learning. COLING- ACL'2006: 129-136. July 2006. Sydney, Austra- lia.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Dependency tree kernels for relation extraction",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Culotta",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Sorensen",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Culotta A. and Sorensen J. (2004). Dependency tree kernels for relation extraction. ACL'2004. 423-429. 21-26 July 2004. Barcelona, Spain.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Discovering relations among named entities form large corpora",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Hasegawa",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Sekine",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hasegawa T., Sekine S. and Grishman R. (2004). Discovering relations among named entities form large corpora. ACL'2004. Barcelona, Spain.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A novel use of statistical parsing to extract information from text",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Miller",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Fox",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Ramshaw",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Weischedel",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Miller S., Fox H., Ramshaw L. and Weischedel R. (2000). A novel use of statistical parsing to ex- tract information from text. ANLP'2000. 226- 233. 29 April -4 May 2000, Seattle, USA",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A study on convolution kernels for shallow semantic parsing",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Moschitti",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "335--342",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Moschitti A. (2004). A study on convolution ker- nels for shallow semantic parsing. ACL'2004:335-342.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Using unlabeled data to improve text classification",
                "authors": [
                    {
                        "first": "K",
                        "middle": [
                            "P"
                        ],
                        "last": "Nigam",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nigam K.P. (2001). Using unlabeled data to im- prove text classification. Technical Report CMU-CS-01-126.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Word Sense Disambiguation Using Label Propagation Based Semi-supervised Learning",
                "authors": [
                    {
                        "first": "Z",
                        "middle": [
                            "Y"
                        ],
                        "last": "Niu",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "H"
                        ],
                        "last": "Ji",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "L"
                        ],
                        "last": "Tan",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Niu Z.Y., Ji D.H., and Tan C.L. (2005). Word Sense Disambiguation Using Label Propagation Based Semi-supervised Learning.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Partially Labeled Classification with Markov Random Walks",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Szummer",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Jaakkola",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "NIPS",
                "volume": "14",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Szummer, M., & Jaakkola, T. (2001). Partially La- beled Classification with Markov Random Walks. NIPS 14.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Document Re-ranking using cluster validation and label propagation",
                "authors": [
                    {
                        "first": "L",
                        "middle": [
                            "P"
                        ],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "H"
                        ],
                        "last": "Ji",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "D"
                        ],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Nie",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "CIKM'",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yang L.P., Ji D.H., Zhou G.D. and Nie Y. (2006). Document Re-ranking using cluster validation and label propagation. CIKM'2006. 5-11 Nov 2006. Arlington, Virginia, USA.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Kernel methods for relation extraction",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Zelenko",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Aone",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Richardella",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Journal of Machine Learning Research",
                "volume": "3",
                "issue": "",
                "pages": "1083--1106",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zelenko D., Aone C. and Richardella. (2003). Ker- nel methods for relation extraction. Journal of Machine Learning Research. 3(Feb):1083-1106.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Discovering Relations from a Large Raw Corpus Using Tree Similarity-based Clustering",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "M"
                        ],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "D"
                        ],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "L"
                        ],
                        "last": "Tan",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Lecture Notes in Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "378--389",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhang M., Su J., Wang D.M., Zhou G.D. and Tan C.L. (2005). Discovering Relations from a Large Raw Corpus Using Tree Similarity-based Clustering, IJCNLP'2005, Lecture Notes in Arti- ficial Intelligence (LNAI 3651). 378-389.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "D"
                        ],
                        "last": "Zhou",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhang M., Zhang J., Su J. and Zhou G.D. (2006). A Composite Kernel to Extract Relations be- tween Entities with both Flat and Structured Features. COLING-ACL-2006: 825-832. Sydney, Australia",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Weakly supervised relation classification for information extraction",
                "authors": [
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhang Z. (2004). Weakly supervised relation clas- sification for information extraction.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Extracting relations with integrated information using kernel methods",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "B"
                        ],
                        "last": "Zhao",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "25--30",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhao S.B. and Grishman R. (2005). Extracting re- lations with integrated information using kernel methods. ACL'2005: 419-426. Univ of Michi- gan-Ann Arbor, USA, 25-30 June 2005.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Exploring various knowledge in relation extraction",
                "authors": [
                    {
                        "first": "G",
                        "middle": [
                            "D"
                        ],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhou G.D., Su J. Zhang J. and Zhang M. (2005). Exploring various knowledge in relation extrac- tion. ACL'2005. 427-434. 25-30 June, Ann Ar- bor, Michgan, USA.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Modeling commonality among related classes in relation extraction",
                "authors": [
                    {
                        "first": "G",
                        "middle": [
                            "D"
                        ],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "COLING-ACL",
                "volume": "",
                "issue": "",
                "pages": "121--128",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhou G.D., Su J. and Zhang M. (2006). Modeling commonality among related classes in relation extraction, COLING-ACL'2006: 121-128. Syd- ney, Australia.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Learning from Labeled and Unlabeled Data with Label Propagation",
                "authors": [
                    {
                        "first": "X",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Ghahramani",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhu, X. and Ghahramani, Z. (2002). Learning from Labeled and Unlabeled Data with Label Propagation. CMU CALD Technical Report. CMU-CALD-02-107.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions",
                "authors": [
                    {
                        "first": "X",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Ghahramani",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lafferty",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhu, X., Ghahramani, Z. and Lafferty, J. (2003). Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. ICML'2003.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "text": "vertex corresponds to an instance, and the edge between any two instances i x and j x is weighted by ij w to measure their similarity. In principle, larger edge weights allow labels to travel through easier. Thus the closer the instances are, the more likely they have similar labels. The algorithm first calculates the weight ij w probability interpretation of the labeling matrix Y .",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "TABREF1": {
                "type_str": "table",
                "num": null,
                "html": null,
                "text": "Comparison of different methods using a state-of-the-art linear kernel on the ACE RDC 2003 corpus (The numbers inside the parentheses indicate the increases in F-measure if we add the ACE RDC 2004 corpus as the unlabeled data)",
                "content": "<table><tr><td>Method</td><td>LP via bootstrapped (weighted) SVs</td><td>LP via bootstrapped (un-weighted) SVs</td><td>LP w/o SVM bootstrapping</td><td>SVM</td><td>(BootProject) SVM Bootstrapping</td></tr><tr><td>5%</td><td>46.5 (+1.4)</td><td>44.5 (+1.7)</td><td>43.1 (+1.0)</td><td>35.4 (-)</td><td>40.6 (+0.9)</td></tr><tr><td>10%</td><td>48.6 (+1.7)</td><td>46.5 (+2.1)</td><td>45.2 (+1.5)</td><td>38.6 (-)</td><td>43.1 (+1.4)</td></tr><tr><td>25%</td><td>51.7 (+1.9)</td><td>50.4 (+2.3)</td><td>49.6 (+1.8)</td><td>43.9 (-)</td><td>47.8 (+1.7)</td></tr><tr><td>50%</td><td>53.6 (+1.8)</td><td>52.6 (+2.2)</td><td>52.1 (+1.7)</td><td>47.2 (-)</td><td>50.5 (+1.6)</td></tr><tr><td>75%</td><td>55.2 (+1.3)</td><td>54.5 (+1.8)</td><td>54.2 (+1.2)</td><td>53.1 (-)</td><td>53.9 (+1.2)</td></tr><tr><td>100%</td><td>56.2 (+1.0)</td><td>55.8 (+1.3)</td><td>55.6 (+0.8)</td><td>55.5 (-)</td><td>55.8 (+0.7)</td></tr><tr><td colspan=\"2\">Table 1: 4.1 Experimental Setting</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">In the ACE RDC 2003 corpus, the training data</td><td/><td/><td/></tr><tr><td colspan=\"3\">consists of 674 annotated text documents (~300k</td><td/><td/><td/></tr><tr><td colspan=\"3\">words) and 9683 instances of relations. During</td><td/><td/><td/></tr><tr><td colspan=\"3\">development, 155 of 674 documents in the training</td><td/><td/><td/></tr><tr><td colspan=\"3\">set are set aside for fine-tuning. The test set is held</td><td/><td/><td/></tr><tr><td colspan=\"3\">out only for final evaluation. It consists of 97</td><td/><td/><td/></tr><tr><td colspan=\"3\">documents (~50k words) and 1386 instances of</td><td/><td/><td/></tr><tr><td colspan=\"3\">relations. The ACE RDC 2003 task defines 5 rela-</td><td/><td/><td/></tr><tr><td colspan=\"3\">tion types and 24 subtypes between 5 entity types,</td><td/><td/><td/></tr><tr><td colspan=\"3\">i.e. 
person, organization, location, facility and GPE.</td><td/><td/><td/></tr><tr><td colspan=\"3\">All the evaluations are measured on the 24 sub-</td><td/><td/><td/></tr><tr><td colspan=\"3\">types including relation identification and classifi-</td><td/><td/><td/></tr><tr><td>cation.</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"3\">This paper uses the ACE RDC 2003 corpus pro-</td></tr><tr><td/><td/><td/><td colspan=\"3\">vided by LDC for evaluation. This corpus is gath-</td></tr><tr><td/><td/><td/><td colspan=\"3\">ered f rom various newspapers, newswires and</td></tr><tr><td/><td/><td/><td>broadcasts.</td><td/><td/></tr></table>"
            }
        }
    }
}