{
    "paper_id": "M91-1033",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:15:36.425074Z"
    },
    "title": "UNIVERSITY OF MASSACHUSETTS: DESCRIPTION OF THE CIRCUS SYSTEM AS USED FOR MUC-3",
    "authors": [
        {
            "first": "Wendy",
            "middle": [],
            "last": "Lehnert",
            "suffix": "",
            "affiliation": {},
            "email": "lehnert@cs.umass.edu"
        },
        {
            "first": "Claire",
            "middle": [],
            "last": "Cardie",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "David",
            "middle": [],
            "last": "Fisher",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Ellen",
            "middle": [],
            "last": "Riloff",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Robert",
            "middle": [],
            "last": "Williams",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "BACKGROUND AND MOTIVATION In 1988 Professor Wendy Lehnert completed the initial implementation of a semantically-oriente d sentence analyzer named CIRCUS [1]. The original design for CIRCUS was motivated by two basi c research interests : (1) we wanted to increase the level of syntactic sophistication associated wit h semantically-oriented parsers, and (2) we wanted to integrate traditional symbolic techniques i n natural language processing with connectionist techniques in an effort to exploit the complementar y strengths of these two computational paradigms. Shortly thereafter, two graduate students, Claire Cardie and Ellen Riloff, began to experimen t with CIRCUS as a mechanism for analyzing citation sentences in the scientific literature [2]. The key idea behind this work was to extract a relatively abstract level of information from each sentence , using only a limited vocabulary that was hand-crafted to handle a restricted set of target concepts. We called this mode of language processing selective concept extraction, and the basic style of sentenc e analysis was a type of text skimming. This project provided us with an opportunity to give CIRCUS a workout and determine whether or not the basic design was working as expected .",
    "pdf_parse": {
        "paper_id": "M91-1033",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "BACKGROUND AND MOTIVATION In 1988 Professor Wendy Lehnert completed the initial implementation of a semantically-oriente d sentence analyzer named CIRCUS [1]. The original design for CIRCUS was motivated by two basi c research interests : (1) we wanted to increase the level of syntactic sophistication associated wit h semantically-oriented parsers, and (2) we wanted to integrate traditional symbolic techniques i n natural language processing with connectionist techniques in an effort to exploit the complementar y strengths of these two computational paradigms. Shortly thereafter, two graduate students, Claire Cardie and Ellen Riloff, began to experimen t with CIRCUS as a mechanism for analyzing citation sentences in the scientific literature [2]. The key idea behind this work was to extract a relatively abstract level of information from each sentence , using only a limited vocabulary that was hand-crafted to handle a restricted set of target concepts. We called this mode of language processing selective concept extraction, and the basic style of sentenc e analysis was a type of text skimming. This project provided us with an opportunity to give CIRCUS a workout and determine whether or not the basic design was working as expected .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Although CIRCUS was subject to a number of limitations, the integration of syntax and semantic s appeared to work very nicely . We believed we had constructed a robust text skimmer that wa s semantically oriented but nevertheless able to use syntactic knowledge as needed . Projects associated with the connectionist aspect of CIRCUS took off at about this time and carried us in those directions for a while [3, 4, 5] . When an announcement for MUC-3 reached us in June of 1990, we felt that the MUC-3 evaluation required selective concept extraction capabilities of just the sort we had been developing . We were eager to put CIRCUS to the test .",
                "cite_spans": [
                    {
                        "start": 409,
                        "end": 412,
                        "text": "[3,",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 413,
                        "end": 415,
                        "text": "4,",
                        "ref_id": null
                    },
                    {
                        "start": 416,
                        "end": 418,
                        "text": "5]",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "It was clear to us that MUC-3 would require a much more ambitious and demanding application o f CIRCUS than our earlier work on citation sentences, and we fully expected to learn a great deal fro m the experience . We hoped to capitalize on Cardie and Riloff's previous experience with CIRCUS whil e identifying some new areas for ongoing research in sophisticated text analysis . In September of 1990, Robert Williams joined our MUC-3 effort as a post doc with research experience in case-base d reasoning . Cardie, Riloff, and Williams provided the technical muscle for all of our MUC-3 syste m development and knowledge engineering. Cardie was primarily responsible for CIRCUS and dictionar y design, Riloff developed the rule-based consolidation component, and Williams designed the casebased reasoning consolidation component . Although the division of labor was fairly clean, everyon e worked with CIRCUS dictionary definitions and the preprocessor at various times as needed . In January of 1991, David Fisher joined the project as an undergraduate assistant who designed a n interface for faster dictionary development while assisting with internal testing . Professor Lehnert assumed a leadership role but made no programming contributions to MUC-3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Although CIRCUS was the primary workhorse underlying our MUC-3 effort, it was necessary t o augment CIRCUS with a separate component that would receive CIRCUS output and massage tha t output into the final target template instantiations required for MUC-3 . This phase of our processin g came to be known as consolidation, although it corresponds more generally to what many people woul d call discourse analysis . We will describe both CIRCUS and our consolidation processing with examples from TST1-MUC3-0099 . (Please consult Appendix H for the complete text of TST1-MUC3-0099 .) A flow chart of our complete system is given in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 631,
                        "end": 639,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "SYSTEM COMPONENT S",
                "sec_num": null
            },
            {
                "text": "To begin, each sentence is given to our preprocessor where a number of domain-specific modifications are made . (1) Dates are analyzed and translated into a canonical form . (2) Words associated with our phrasal lexicon are connected via underscoring. (3) Punctuation marks are translated into atoms mor e agreeable to LISP . For example, the first sentence (SI) reads :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Preprocessin g",
                "sec_num": null
            },
            {
                "text": "After preprocessing, we have:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SI : POLICE HAVE REPORTED THAT TERRORISTS TONIGHT BOMBED THE EMBASSIES OF TH E PRC AND THE SOVIET UNION.",
                "sec_num": null
            },
            {
                "text": "The canonical date was derived from \"tonight\" and the dateline of the article, \"25 OCT 89 .\" Most of our phrasal lexicon is devoted to proper names describing locations and terrorist organizations (83 1 entries) . 25 additional proper names are also recognized, but not from the phrasal lexicon .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SI : (POLICE HAVE REPORTED THAT TERRORISTS ON OCT_25_89 >CO TONIGHT BOMBED TH E EMBASSIES OF THE PRC AND THE SOVIET UNION >PE )",
                "sec_num": null
            },
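            {
                "text": "A minimal sketch of steps (2) and (3), assuming a hypothetical two-entry stand-in for the 831-entry phrasal lexicon and hypothetical helper names (the date canonicalization of step (1) is omitted):\n\n;; Hypothetical sketch, not the actual MUC-3 preprocessor.\n(defparameter *phrasal-lexicon*\n  '((\"SOVIET UNION\" . \"SOVIET_UNION\") (\"SHINING PATH\" . \"SHINING_PATH\"))\n  \"Toy stand-in for the 831-entry phrasal lexicon.\")\n\n(defun underscore-phrases (sentence)\n  ;; Step (2): connect known multi-word names with underscores.\n  (loop for (phrase . joined) in *phrasal-lexicon*\n        do (let ((pos (search phrase sentence)))\n             (when pos\n               (setf sentence\n                     (concatenate 'string (subseq sentence 0 pos) joined\n                                  (subseq sentence (+ pos (length phrase))))))))\n  sentence)\n\n(defun translate-punctuation (sentence)\n  ;; Step (3): commas and periods become the LISP-friendly atoms >CO and >PE.\n  (with-output-to-string (out)\n    (loop for ch across sentence\n          do (case ch\n               (#\\, (write-string \" >CO\" out))\n               (#\\. (write-string \" >PE\" out))\n               (t (write-char ch out))))))\n\n;; (translate-punctuation (underscore-phrases \"BOMBED THE EMBASSIES OF THE PRC AND THE SOVIET UNION.\"))\n;; => \"BOMBED THE EMBASSIES OF THE PRC AND THE SOVIET_UNION >PE\"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S1: (POLICE HAVE REPORTED THAT TERRORISTS ON OCT_25_89 >CO TONIGHT BOMBED THE EMBASSIES OF THE PRC AND THE SOVIET UNION >PE )",
                "sec_num": null
            },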
            {
                "text": "At this point we are ready to hand the sentence to CIRCUS for lexical processing . This is where we search our dictionary and apply morphological analysis in an effort to recognize words in the sentence . Any words that are not recognized receive a default tag reserved for proper nouns in case we need t o make sense out of unknown words later . In order for any semantic analysis to take place, we need to recognize a word that operates as a trigger for a concept node definition . If a sentence contains no concep t node triggers, it is ignored by the semantic component . This is one way that irrelevant texts can be identified : texts that trigger no concept nodes are deemed irrelevant . Words in our dictionary are associated with a syntactic part of speech, a position or positions within a semantic feature hierarchy , possible concept node definitions if the item operates as a concept node trigger, and syntacti c complement predictions . Concept nodes and syntactic complement patterns will be described in the nex t section . An example of a dictionary entry with all four entry types is our definition for \"dead\" as seen i n figure 2 . Figure 2 : The Dictionary Definition for \"DEAD \" When morphological routines are used to strip an inflected or conjugated form back to its root, th e root-form dictionary definition is dynamically modified to reflect the morphological information . Fo r example, the root definition for \"bomb\" will pick up a :VERB-FORM slot with PAST filling it when the lexical item \"bombed\" is encountered .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1150,
                        "end": 1158,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Lexical Analysi s",
                "sec_num": null
            },
            {
                "text": "))))) ) :WORD-SENSES (dead1 ) :CN-DEFS ($LEFT-DEAD$ $FOUND-DEAD$ $FOUND-DEAD-PASS$) )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysi s",
                "sec_num": null
            },
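            {
                "text": "The dynamic modification described above can be sketched as follows; the morphology is reduced to the regular past tense, and every name except :VERB-FORM is hypothetical:\n\n;; Hypothetical sketch: strip an inflected form back to its root and\n;; extend the root-form definition with the morphological information.\n(defparameter *dictionary*\n  '((bomb :part-of-speech verb :word-senses (bomb1) :cn-defs ($bombing-3$))))\n\n(defun strip-suffix (word)\n  ;; Toy morphology: recognize only the regular past tense -ed.\n  (let ((name (symbol-name word)))\n    (if (and (> (length name) 3)\n             (string= \"ED\" name :start2 (- (length name) 2)))\n        (values (intern (subseq name 0 (- (length name) 2))) 'past)\n        (values word nil))))\n\n(defun lookup (word)\n  ;; Return the root-form definition, dynamically extended with a\n  ;; :VERB-FORM slot when the surface form was inflected.\n  (multiple-value-bind (root tense) (strip-suffix word)\n    (let ((def (cdr (assoc root *dictionary*))))\n      (if (and def tense)\n          (append def (list :verb-form tense))\n          def))))\n\n;; (lookup 'bombed)\n;; => (:PART-OF-SPEECH VERB :WORD-SENSES (BOMB1) :CN-DEFS ($BOMBING-3$) :VERB-FORM PAST)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Analysis",
                "sec_num": null
            },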
            {
                "text": "Words associated with concept nodes activate both syntactic and semantic predictions . In Sl th e verb \"bombed\" activates semantic predictions in the form of a concept node designed to describe a bombing . Each concept node describes a semantic case frame with variable slots that expect to be fille d by specific syntactic constituents . The concept node definition activated by \"bombed\" in Si is given i n Figure 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 408,
                        "end": 416,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Semantic and Syntactic Predictions",
                "sec_num": null
            },
            {
                "text": "We can see from this definition that a case frame with variable slots for the actor and target i s predicted . The actor slot expects to be filled by an organization, the name of a recognized terrorist o r generic terrorist referent, a proper name, or any reference to a person . The target slot expects to be fille d by a physical target . We also expect to locate the actor in the subject of the sentence, and the targe t should appear as either a direct object or the object of a prepositional phrase containing the prepositio n \"in .\" None of these predictions will be activated unless the current sentence is in the active voice .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic and Syntactic Predictions",
                "sec_num": null
            },
            {
                "text": ";; X bombed/dynamited/blew_up ;; the bomb blew up in the building (tstl-0040) -emr ;; (if this causes trouble we can create a new cn for blew_up ) (define-word $BOMBING-3$ (CONCEPT-NODE ' :NAME '$BOMBING- ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic and Syntactic Predictions",
                "sec_num": null
            },
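            {
                "text": "A minimal sketch of how a concept node of this kind could map filled syntactic buffers into a case frame; the structure and function names are hypothetical, and the semantic class constraints on each slot are omitted:\n\n;; Hypothetical sketch of concept node instantiation. *S* and *DO*\n;; name the subject and direct-object constituent buffers.\n(defstruct concept-node name variable-slots constant-slots enabled-by)\n\n(defparameter *bombing-3*\n  (make-concept-node :name '$bombing-3$\n                     :variable-slots '((actor . *s*) (target . *do*))\n                     :constant-slots '((type . bombing))\n                     :enabled-by '(active)))\n\n(defun instantiate (cn buffers voice)\n  ;; Fire only when the enabling condition holds (here: active voice),\n  ;; then pull each variable slot's filler out of its predicted buffer.\n  (when (member voice (concept-node-enabled-by cn))\n    (append (concept-node-constant-slots cn)\n            (loop for (slot . buffer) in (concept-node-variable-slots cn)\n                  for filler = (cdr (assoc buffer buffers))\n                  when filler collect (cons slot filler)))))\n\n;; (instantiate *bombing-3* '((*s* . terrorists) (*do* . embassies)) 'active)\n;; => ((TYPE . BOMBING) (ACTOR . TERRORISTS) (TARGET . EMBASSIES))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic and Syntactic Predictions",
                "sec_num": null
            },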
            {
                "text": "Syntactic complement predictions are managed by a separate mechanism that operate s independently of the concept nodes . The syntactic predictions fill syntactic constituent buffers with appropriate sentence fragments that can be used to instantiate various concept node case frames . Syntactic predictions are organized in decision trees using test-action pairs under a stack-based contro l structure [6] . Although syntactic complements are commonly associated with verbs (verb complements), we have found that nouns should be used to trigger syntactic complement predictions with equa l frequency . Indeed, any part of speech can trigger a concept node and associated complement prediction s as needed . As we saw in the previous section, the adjective \"dead\" is associated with syntacti c complement predictions to facilitate noun phrase analysis. Figure 4 shows the syntactic complemen t pattern predicted by \"bombed\" once morphology recognizes the root verb \"bomb . \" (((test (second-verb-or-infinitive?))",
                "cite_spans": [
                    {
                        "start": 402,
                        "end": 405,
                        "text": "[6]",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 852,
                        "end": 860,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Figure 3 : The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },
            {
                "text": "; ; all verbs call this functio n (assign *part-of-speech* 'verb ; ; just to be sure . ; ; don't predict *DO* if conjunction follows the verb, ; ; e .g ., in \"X was damaged and Y was destroyed\" , ; ; Y should NOT be *DO* of \"damaged \" ((test (equal *part-of-speech* 'conjunction))))) ) Figure 4 : The Verb Complement Pattern for \"BOMBED \" Remarkably, Figure 4 displays all the syntactic knowledge CIRCUS needs to know about verbs . Every verb in our dictionary references this same prediction pattern . In particular, this means that we have found no need to distinguish transitive verbs from intransitive verbs, since this one piece of cod e handles both (if the prediction for a direct object fails, the *DO* buffer remains empty) .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 286,
                        "end": 294,
                        "text": "Figure 4",
                        "ref_id": null
                    },
                    {
                        "start": 351,
                        "end": 359,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Figure 3 : The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },
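            {
                "text": "The test-action organization behind Figure 4 can be sketched with a small interpreter; every name here is hypothetical, but the behavior illustrated is the one described above: the single shared verb pattern withholds the *DO* prediction when a conjunction follows the verb.\n\n;; Hypothetical sketch: a prediction is a list of (test . actions)\n;; branches; the first branch whose test succeeds has its actions run.\n(defun run-prediction (branches state)\n  (loop for (test . actions) in branches\n        when (funcall test state)\n          return (dolist (action actions state)\n                   (setf state (funcall action state)))))\n\n(defparameter *verb-pattern*\n  (list (cons (lambda (state) (eq (getf state :next) 'conjunction))\n              (list #'identity))                  ; conjunction follows: predict nothing\n        (cons (lambda (state) (declare (ignore state)) t)\n              (list (lambda (state)               ; default branch: expect a *DO*\n                      (list* :predict '*do* state))))))\n\n;; (run-prediction *verb-pattern* '(:next noun))        => (:PREDICT *DO* :NEXT NOUN)\n;; (run-prediction *verb-pattern* '(:next conjunction)) => (:NEXT CONJUNCTION)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure 3: The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },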
            {
                "text": "Once semantic and syntactic predictions have interacted to produce a set of case frame slot fillers , we then create a frame instantiation which CIRCUS outputs in response to the input sentence . In general, CIRCUS can produce an arbitrary number of case frame instantiations for a single sentence. N o effort is made to integrate these into a larger structure . The concept node instantiation created b y $BOMBING-3$ in response to Sl is given in Figure 5 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 448,
                        "end": 456,
                        "text": "Figure 5",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Figure 3 : The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },
            {
                "text": "Some case frame slots are not predicted by the concept node definition but are inserted into th e frame in a bottom-up fashion. Slots describing time specifications and locations are all filled by a mechanism for bottom-up slot insertion (e .g . the REL-LINK slot in Figure 5 was created in this way) . Although the listing in Figure 5 shows only the head noun \"embassies\" in the target noun group slot, th e full phrase \"embassies of the PRC and the Soviet Union\" has been recognized as a noun phrase and ca n be recovered from this case frame instantiation . The target value \"ws-diplomat-office-or-residence\" i s a semantic feature retrieved from our dictionary definition for \"embassy .\" No additional output i s produced by CIRCUS in response to Si . It is important to understand that CIRCUS uses no sentence grammar, and does not produce a full syntactic analysis for any sentences processed . Syntactic constitutents are utilized only when a concep t node definition asks for them . Our method of syntactic analysis operates locally, and syntacti c predictions are indexed by lexical items . We believe that this approach to syntax is highly advantageous when dictionary coverage is sparse and large sentence fragments can be ignored withou t adverse consequences . This allows us to minimize our dictionaries as well as the amount of processin g needed to handle selective concept extraction from open-ended texts .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 267,
                        "end": 275,
                        "text": "Figure 5",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 327,
                        "end": 335,
                        "text": "Figure 5",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Figure 3 : The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },
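            {
                "text": "A minimal sketch of bottom-up slot insertion, replacing the numeric relaxation machinery with a simple word-distance test; the function name, the pair representation, and the threshold are all hypothetical:\n\n;; Hypothetical sketch: dates and locations are recognized on their\n;; own and attached as REL-LINKs to case frames triggered nearby.\n(defun insert-bottom-up (frames rel-links &key (max-distance 5))\n  ;; FRAMES and REL-LINKS are (word-position . content) pairs.\n  (loop for (fpos . frame) in frames\n        collect (cons frame\n                      (loop for (rpos . link) in rel-links\n                            when (<= (abs (- fpos rpos)) max-distance)\n                              collect link))))\n\n;; (insert-bottom-up '((6 . bombing-frame)) '((4 . (time oct_25_89))))\n;; => ((BOMBING-FRAME (TIME OCT_25_89)))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure 3: The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },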
            {
                "text": "Some concept nodes are very simple and may contain no variable slots at all . For example, CIRCUS generates two simple frames in response to S2, neither of which contain variable slot fillers .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure 3 : The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },
            {
                "text": "Note that the output generated by CIRCUS for S2 as shown in Figure 6 is incomplete . There should be a representation for the damage . This omission is the only CIRCUS failure for TST1-MUC3-0099, an d it results from a noun/verb disambiguation failure . ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 60,
                        "end": 68,
                        "text": "Figure 6",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Figure 3 : The $BOMBING-3$ Concept Node Definition",
                "sec_num": null
            },
            {
                "text": "Special mechanisms are devoted to handling specific syntactic constructs, including appositives an d conjunctions. We will illustrate our handling of conjunctions by examining two instances of \"and\" in S5 : S5: Police said the attacks were carried out almost simultaneously and 1that the bombs broke windows and(2) destroyed the two vehicles .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Problems in Sentence Analysis",
                "sec_num": null
            },
            {
                "text": "We recognize that and(1) is not part of a noun phrase conjunction, but do nothing else with it . A new control kernel begins after \"that\" and reinitializes the state of the parser . and2 is initially recognized as potentially joining two noun phrases --\"windows\" and whatever noun phrase follows . However, when the verb \"destroyed\" appears before any conjoining noun phrase is recognized, the LICK mechanism determines that the conjunction actually joins two verbs and begins a new clause . As a result, the subject of \"broke\" (i .e., \"the bombs\") correctly becomes the subject of \"destroyed\" as well .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Problems in Sentence Analysis",
                "sec_num": null
            },
            {
                "text": "When an entire text has been processed by CIRCUS, the list of the resulting case fram e instantiations is passed to consolidation . A rule base of consolidation heuristics then attempts to merg e associated case frames and create target template instantiations that are consistent with MUC-3 encoding guidelines . It is possible for CIRCUS output to be thrown away at this point if consolidatio n does not see enough information to justify a target template instantiation . If consolidation is not satisfied that the output produced by CIRCUS describes bonafide terrorist incidents, consolidation ca n declare the text irrelevant . A great deal of domain knowledge is needed by consolidation in order to make these determinations . For example, semantic features associated with entities such a s perpetrators, targets, and dates are checked to see which events are consistent with encodin g guidelines. In this way, consolidation operates as a strong filter for output from CIRCUS, allowing us t o concisely implement encoding guidelines independently of our dictionary definitions. A number of discourse-level decisions are made during consolidation, including pronoun resolutio n and reference resolution . Some references are resolved by frame merging rules. For example, CIRCUS output from S1, S2 and S3 is merged during consolidation to produce the target template instantiatio n found in Figure 7 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1395,
                        "end": 1403,
                        "text": "Figure 7",
                        "ref_id": "FIGREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Rule-Based Consolidatio n",
                "sec_num": null
            },
            {
                "text": "The CIRCUS output from Sl triggers a rule called create-bombing which generates a templat e instantiation that eventually becomes the one in Figure 7 . But to arrive at the final template, we mus t first execute three more consolidation rules that combine the preliminary template with output from S 2 and S3 . Pseudo-code for two of these three rules is given in Figure 8 . ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 141,
                        "end": 149,
                        "text": "Figure 7",
                        "ref_id": "FIGREF6"
                    },
                    {
                        "start": 364,
                        "end": 372,
                        "text": "Figure 8",
                        "ref_id": "FIGREF7"
                    }
                ],
                "eq_spans": [],
                "section": "Rule-Based Consolidatio n",
                "sec_num": null
            },
            {
                "text": "IF $weapon structure and the weapon is an explosiv e and a BOMBING or ATTEMPTED-BOMBING template is on the stack in the current famil y and dates are compatible and locations are compatibl e Note also that the location of the incident was merged into this frame from S3 which trigger s another bombing node in response to the verb \"exploded\" as shown in figure 9 . Once again, the top-level REL-LINK for a location is printing out only a portion of the complet e noun phrase that was captured.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "------------------------------------------------------------------------------------------------------------------------------------------------------------------MERGE-WEAPON-BOMBIN G",
                "sec_num": null
            },
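            {
                "text": "The MERGE-WEAPON-BOMBING pseudo-code can be read as the following runnable sketch; the plist representation and the compatibility tests are hypothetical simplifications of the real rule:\n\n;; Hypothetical sketch: fold an explosive WEAPON frame into a\n;; compatible BOMBING (or ATTEMPTED-BOMBING) template.\n(defun merge-weapon-bombing (weapon templates)\n  ;; Return the matching template extended with the weapon's\n  ;; instrument, or NIL when no compatible template is on the stack.\n  (let ((tmpl (find-if (lambda (tm)\n                         (and (member (getf tm :type)\n                                      '(bombing attempted-bombing))\n                              (equal (getf tm :date) (getf weapon :date))\n                              (equal (getf tm :location) (getf weapon :location))))\n                       templates)))\n    (when (and tmpl (eq (getf weapon :class) 'explosive))\n      (list* :instrument (getf weapon :instr) tmpl))))\n\n;; (merge-weapon-bombing '(:class explosive :instr bomb :date oct_25_89 :location lima)\n;;                       '((:type bombing :date oct_25_89 :location lima)))\n;; => (:INSTRUMENT BOMB :TYPE BOMBING :DATE OCT_25_89 :LOCATION LIMA)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "MERGE-WEAPON-BOMBING",
                "sec_num": null
            },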
            {
                "text": "Although we would say that the referent to \"bombs\" in S2 is effectively resolved durin g consolidation, our methods are not of the type normally associated with linguistic discourse analysis . When consolidation examines these case frames, we are manipulating information on a conceptua l rather than linguistic level . We need to know when two case frame descriptions are providin g information about the same event, but we aren't worried about referents for specific noun phrases per se .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S3 : (A CAR_BOMB EXPLODED IN_FRONT_OF THE PRC EMBASSY >CO IN THE LIM",
                "sec_num": null
            },
            {
                "text": "We did reasonably well on this story . Three templates of the correct event types were generated and no spurious templates were created by the rule base. Sentences S9 through S13 might hav e generated spurious templates if we didn't pay attention to the dates and victims . Here is how the preprocessor handled S12 :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S3 : (A CAR_BOMB EXPLODED IN_FRONT_OF THE PRC EMBASSY >CO IN THE LIM",
                "sec_num": null
            },
            {
                "text": "Whenever the preprocessor recognizes a date specification that is \"out of bounds\" (at least tw o months prior to the dateline), it inserts -DEC_31_80 as a flag to indicate that the events associate d with this date are irrelevant . This date specification will then be picked up by any concept nod e instantiations that are triggered \"close\" to the date description . In this case, the event is irrelevan t both because of the date and because of the the victim (murdered militants aren't usually relevant) . Despite the fact that S10 and S13 contain no date descriptions, the case frames generated for thes e sentences are merged with other frames that do carry disqualifying dates, and are therefore handled a t a higher level of consolidation . In the end, the two murders (S11 and S12) are discarded because o f disqualifications on their victim slot fillers, while the bombing (S9) was discarded because of the dat e specification. The injuries described by S10 are correctly merged with output from S9, and therefor e discarded because of the date disqualifier . Likewise, the dynamite from S13 is correctly merged with the murder of the militant, and the dynamite is subsequently discarded along with the rest of tha t template .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S12 : IN ANOTHER INCIDENT 3 YEARS AGO, A SHINING PATH MILITANT WAS KILLED B Y SOVIET EMBASSY GUARDS INSIDE THE EMBASSY COMPOUND . S12 : (IN ANOTHER INCIDENT ON -DEC_31_80 >CO &&3 YEARS AGO >CO >CO A SHINING_PATH MILITANT WAS KILLED BY SOVIET EMBASSY GUARDS INSIDE TH E EMBASSY COMPOUND >PE )",
                "sec_num": null
            },
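            {
                "text": "The out-of-bounds test can be sketched as follows, with dates reduced to hypothetical (year month) pairs since the real date parser is not shown:\n\n;; Hypothetical sketch: an event date at least two months before the\n;; article's dateline is rewritten to the sentinel -DEC_31_80, which\n;; later disqualifies any template the date gets attached to.\n(defun date->months (date)\n  ;; DATE is a (year month) pair, e.g. (89 10) for OCT 89.\n  (+ (* 12 (first date)) (second date)))\n\n(defun canonicalize-date (event-date dateline)\n  (if (<= (date->months event-date) (- (date->months dateline) 2))\n      '-dec_31_80\n      event-date))\n\n;; (canonicalize-date '(86 10) '(89 10)) => -DEC_31_80  ; 3 years ago\n;; (canonicalize-date '(89 10) '(89 10)) => (89 10)     ; current month",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S12: IN ANOTHER INCIDENT 3 YEARS AGO, A SHINING PATH MILITANT WAS KILLED BY SOVIET EMBASSY GUARDS INSIDE THE EMBASSY COMPOUND. S12: (IN ANOTHER INCIDENT ON -DEC_31_80 >CO &&3 YEARS AGO >CO >CO A SHINING_PATH MILITANT WAS KILLED BY SOVIET EMBASSY GUARDS INSIDE THE EMBASSY COMPOUND >PE )",
                "sec_num": null
            },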
            {
                "text": "The CBR component of consolidation is an optional part of our system, designed to increase recal l rates by generating additional templates to augment the output of rule-based consolidation . These extra templates are generated on the basis of correlations between CIRCUS output for a given text, and th e key target templates for similarly indexed texts . The CBR component uses a case base which draw s from 283 texts in the development corpus, and the 100 texts from TST1 for a total of 383 texts . We experimented with a larger case base but found no improvement in performance . The case base contains 254 template type patterns based on CIRCUS output for the 383 texts in the case base .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
            {
                "text": "Each case in the case base associates a set of concept nodes with a template containing slot fillers from those concept nodes . The concept nodes are generated by CIRCUS when it analyzes the origina l source text . A case has two parts: (1) an incident type, and (2) a set of sentence/slot name patterns . Fo r example, suppose a story describes a bombing such that the perpetrator and the target were mentioned in one sentence, and the target was mentioned again three sentences later . The resulting case would b e generated in response to this text:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
            {
                "text": "BOMBING 0 : (PERP TARGET) 3 : (TARGET)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
            {
                "text": "The numerical indices are relative sentence positions . The same pattern could apply no matte r where the two sentences occurred in the text, as long as they were three sentences apart .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
            {
                "text": "Cases are used to determine when a set of concept nodes all contribute to the same output template . When a new text is analyzed, a probe is used to retrieve cases from the case base . Retrieval probes are new sentence/slot name patterns extracted from the current CIRCUS output . If the sentence/slot name pattern of a probe matches the sentence/slot name pattern of a case in the case base, that case i s retrieved, the probe has succeeded, and no further cases are considered .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
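            {
                "text": "Retrieval can be sketched as an exact match over position-normalized patterns; the list layout of cases and the normalization helper are hypothetical, but the matching criterion follows the description above:\n\n;; Hypothetical sketch of case retrieval. A case is an incident type\n;; plus (relative-sentence slot-names) pairs; a probe matches when the\n;; patterns coincide after shifting the probe's first sentence to 0.\n(defun normalize (pattern)\n  (let ((base (caar pattern)))\n    (loop for (pos slots) in pattern\n          collect (list (- pos base) slots))))\n\n(defun retrieve (probe case-base)\n  ;; Return the first case whose normalized pattern equals the probe's.\n  (find (normalize probe) case-base\n        :key (lambda (c) (normalize (second c)))\n        :test #'equal))\n\n(defparameter *case-base*\n  '((bombing ((0 (perp target)) (3 (target))))))\n\n;; (retrieve '((7 (perp target)) (10 (target))) *case-base*)\n;; => (BOMBING ((0 (PERP TARGET)) (3 (TARGET))))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },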
            {
                "text": "Maximal probes are constructed by grouping CIRCUS output into maximal clusters that yiel d successful probes . In this way, we attempt to identify large groups of consecutive concept nodes that al l contribute to the same output template . Once a maximal probe has been identified, the incident type o f the retrieved case forms the basis for a new CBR-generated output template whose slots are filled b y concept node slot fillers according to appropriate mappings between concept nodes and output templates.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
            {
                "text": "In the case of TST1-MUC3-0099, case-based consolidation proposes hypothetical template s corresponding to 3 bombings, 2 murders,1 attack, and 1 arson incident . Two of the bombings and the arso n are discarded because they were already generated by rule-based consolidation . The two murders ar e discarded because of victim and target constraints, while the third bombing is discarded because of a date constraint. The only surviving template is the attack incident, which turns out to be spurious . It is interesting to note that for this text, the CBR component regenerates each of the templates created b y rule-based condolidation, and then discards them for the same reasons they were discarded earlier, o r because they were recognized to be redundant against the rule-based output . We have not run any experiments to see how consistently the CBR component duplicates the efforts of rule-based consolidation . While such a study would be very interesting, we should note that the CBR template s are generally more limited in the number of slot fillers present, and would therefore be hard pressed to duplicate the overall performance of rule-based consolidation .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Case-Based Consolidation",
                "sec_num": null
            },
            {
                "text": "As we explained at the beginning of this paper, CIRCUS was originally designed to investigate th e integration of connectionist and symbolic techniques for natural language processing . The origina l connectionist mechanisms in CIRCUS operated to manage bottom-up slot insertion for information foun d in unexpected (i .e . unpredicted) prepositional phrases. Yet when our task orientation is selectiv e concept extraction, the information we are trying to isolate is strongly predicted, and therefore unlikel y to surface in a bottom-up fashion. For MUC-3, we discovered that bottom-up slot insertion was neede d primarily to handle only dates and locations : virtually all other relevant information was managed i n a predictive fashion . Because dates and locations are relatively easy to recognize, any number o f techniques could be successfully employed to handle bottom-up slot insertion for MUC-3 . Although we used the numeric relaxation technique described in [1] to handle dates and locations, we consider thi s mechanism to be excessively powerful for the task at hand, and it could readily be eliminated fo r efficiency reasons in a practical implementation .",
                "cite_spans": [
                    {
                        "start": 970,
                        "end": 973,
                        "text": "[1]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CONCLUSIONS",
                "sec_num": null
            },
            {
                "text": "Although our score reports for TST2 indicate that our system is operating at the leading edge of overall performance for all MUC-3 systems, we nevertheless acknowledge that there are difficultie s with our approach in terms of system development . It would take us a lot of hard work (again) to scal e up to this same level of performance in a completely new domain . New and inexperienced technica l personnel would probably require about 6 months of training before they would be prepared to attempt a technology transfer to a new domain . At that point we would estimate that another 1 .5 person/years of effort would be needed to duplicate our current levels of performance in a new domain . Although these investments are not prohibitive, we believe there is room for improvement in the ways that we ar e engineering our dictionary entries and rule-based consolidation components . We need to investigat e strategies for deducing linguistic regularities from texts and explore available resources that might leverage our syntactic analysis . Similar steps should be taken with respect to semantic analysis although we are much more skeptical about the prospects for sharable resources in this problem area .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CONCLUSIONS",
                "sec_num": null
            },
            {
                "text": "Although we have had very little time to experiment with the CBR consolidation component, th e CBR approach is very exciting in terms of system development possibilities . While the rule-based consolidation component had to be crafted and adjusted by hand, the case base for the CBR componen t was generated automatically and required virtually no knowledge of the domain or CIRCUS per se . In fact, our CBR module can be transported with minor modification to any other MUC-3 system tha t generates case frame meaning representations for sentences . As a discourse analysis component, thi s module is truly generic and could be moved into a new domain with simple adjustments . The labor needed to make the CBR component operational is the labor needed to create a development corpus o f texts with associated target template encodings (assuming a working sentence analyzer is already i n place) . It is much easier to train people to generate target templates for texts than it is to trai n computer programmers in the foundations of artificial intelligence and the design of large rule bases . And the amount of time needed to generate a corpus from scratch is only a fraction of the time needed t o scale up a complicated rule base. So the advantages of CBR components for discourse analysis ar e enticing to say the least. But much work needs to be done before we can determine the functionality o f this technology as a strategy for natural language processing .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CONCLUSIONS",
                "sec_num": null
            },
            {
                "text": "Having survived the MUC-3 experience, we can say that we have learned a lot about CIRCUS, th e complexity of discourse analysis, and the viability of selective concept extraction as a technique fo r sophisticated text analysis . We are encouraged by our success, and we are now optimally positioned t o explore exciting new research areas . Although our particpation in MUC-3 has been a thoroughl y positive experience, we recognize the need to balance intensive development efforts of this type agains t the somewhat riskier explorations of basic research . We would not expect to benefit so dramatically from another intensive performance evaluation if we couldn't take some time to first digest the lesson s 232 we have learned from MUC-3 . Performance evaluations can operate as an effective stimulus fo r research, but only if they are allowed to accompany rather than dominate our principal researc h activities .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CONCLUSIONS",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "CATEGORY OF INCIDENT 5. PERPETRATOR: ID OF INDIV(S )",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "CATEGORY OF INCIDENT 5. PERPETRATOR: ID OF INDIV(S )",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "PHYSICAL TARGET : ID(S)",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "PHYSICAL TARGET : ID(S)",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "HUMAN TARGET: ID(S)",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "HUMAN TARGET: ID(S)",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "TARGET : FOREIGN NATION(S ) 15.INSTRUMENT : TYPE(S)",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "TARGET : FOREIGN NATION(S ) 15.INSTRUMENT : TYPE(S)",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "LOCATION OF INCIDENT 17. EFFECT ON PHYSICAL TARGET(S)",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "LOCATION OF INCIDENT 17. EFFECT ON PHYSICAL TARGET(S)",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "EFFECT ON HUMAN TARGET(S) NO INJURY OR DEATH",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "EFFECT ON HUMAN TARGET(S) NO INJURY OR DEATH: \"-\"",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Symbolic/Subsymbolic Sentence Analysis : Exploiting the Best of Two Worlds",
                "authors": [
                    {
                        "first": "W",
                        "middle": [
                            "G"
                        ],
                        "last": "Lehnert",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "135--164",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lehnert, W.G ., \"Symbolic/Subsymbolic Sentence Analysis : Exploiting the Best of Two Worlds, \" Technical Report No . 88-99, Department of Computer and Information Science, University of Massachusetts. 1988 . Also available in Advances in Connectionist and Neural Computation Theory , Vol . I. (ed : J. Pollack and J . Barnden) . pp . 135-164 . Ablex Publishing, Norwood, New Jersey . 1991 .",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Analyzing Research Papers Using Citation Sentences",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Lehnert",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Cardie",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Riloff",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Proceedings of the Twelfth Annual Conference of the Cognitive Science Society",
                "volume": "",
                "issue": "",
                "pages": "511--518",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lehnert, W ., Cardie, C . and Riloff, E ., \"Analyzing Research Papers Using Citation Sentences,\" i n Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, Boston MA. pp . 511 - 518 . 1990 .",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Integration of Semantic and Syntactic Constraints for Structural Noun Phras e Disambiguation",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Wermter",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Proceedings of the Eleventh International Joint Conference on Artificia l Intelligence",
                "volume": "",
                "issue": "",
                "pages": "1486--1491",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wermter, S., \"Integration of Semantic and Syntactic Constraints for Structural Noun Phras e Disambiguation\", in Proceedings of the Eleventh International Joint Conference on Artificia l Intelligence, pp . 1486-1491 . 1989 .",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Learning Semantic Relationships in Compound Nouns with Connectionist Networks",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Wermter",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Proceedings of Eleventh Annual Conference on Cognitive Science",
                "volume": "",
                "issue": "",
                "pages": "964--971",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Wermter, \"Learning Semantic Relationships in Compound Nouns with Connectionist Networks\" , in Proceedings of Eleventh Annual Conference on Cognitive Science , pp . 964-971 . 1989 .",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A Hybrid Symbolic/Connectionist Model for Noun Phras e Understanding",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Wermter",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [
                            "G"
                        ],
                        "last": "Lehnert",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Connection Science",
                "volume": "1",
                "issue": "3",
                "pages": "255--272",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wermter, S . and Lehnert, W .G ., \"A Hybrid Symbolic/Connectionist Model for Noun Phras e Understanding,\" in Connection Science, Vol . 1, No . 3 . pp . 255-272 . 1989 .",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Micro ELP",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Riesbeck",
                        "suffix": ""
                    }
                ],
                "year": 1981,
                "venue": "Inside Computer Understanding",
                "volume": "",
                "issue": "",
                "pages": "354--372",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Riesbeck, C., \"Micro ELP\", in Inside Computer Understanding, (eds: R . Schank and C . Riesbeck) pp . 354-372. Lawrence Erlbaum Associates, 1981 .",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Flow Chart for the MUC-3/CIRCUS System"
            },
            "FIGREF2": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "-CONSTRAINTS '(((class organization *S*)(class terrorist *S* ) (class proper-name *S*)(class human *S*)) ((class phys-target *DO*) (class phys-target *PP*)) ) ' :VARIABLE-SLOTS '( ACTOR (*S* 1 ) TARGET (*DO* 1 *PP* (is-prep? '(in))) ) ' :CONSTANT-SLOTS '( TYPE BOMBING) ) ' :ENABLED-BY '((active))) )"
            },
            "FIGREF3": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "SI : (POLICE HAVE REPORTED THAT TERRORISTS ON OCT_25_89 >CO TONIGHT BOMBED TH E EMBASSIES OF THE PRC AND THE SOVIET UNION >PE ) \u2022 TYPE = BOMBING \u2022 ACTOR = WS-TERRORIST \u2022 noun group = (TERRORISTS ) \u2022 TARGET = WS-DIPLOMAT-OFFICE-OR-RESIDENC E \u2022 noun group = (EMBASSIES ) \u2022 determiners = (THE ) REL-LINK (TIME (OCT_25_89)) ) CIRCUS Output for SI"
            },
            "FIGREF4": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "The 14 sentences in TST1-MUC3-0099 resulted in a total of 27 concept node instantiations describing bombings, weapons, injuries, attacks, destruction , perpetrators, murders, arson, and new event markers . *** S2 : (THE BOMBS CAUSED DAMAGE BUT NO INJURIES >PE ) TYPE = WEAPON INSTR = BOMB (triggered by the noun \"bombs\") \u2022 TYPE = INJURY \u2022 MODE = NEG (triggered by the noun \"injuries\") CIRCUS Output for S2"
            },
            "FIGREF5": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "THE PRC AND THE SOVIETUNION \" \"PRC EMBASSY\" PLURAL _ DIPLOMAT OFFICE OR RESIDENCE : \"EMBASSIES OF THE PRC AND THE SOVIET UNION\" DIPLOMAT OFFICE OR RESIDENCE : \"PRC EMBASSY \" PEOPLES REP OF CHINA : \"EMBASSIES OF THE PRC AND THE SOVIET UNION\" PEOPLES REP OF CHINA : \"PRC EMBASSY \" PERU: LIMA (CITY) : SAN ISIDRO (NEIGHBORHOOD )"
            },
            "FIGREF6": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Our Output Template Representation for Sl-S3"
            },
            "FIGREF7": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Two Merging Rules from Rule-Based Consolidation"
            },
            "FIGREF8": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "(TARGET OBJECT (WS-DIPLOMAT-OFFICE-OR-RESIDENCE) ) >>> REL-LINK (LOC2 OBJECT (WS\u2022-GENERIC-LOC) ) CIRCUS Output for S3"
            }
        }
    }
}