{
    "paper_id": "A00-1023",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T01:12:19.233891Z"
    },
    "title": "A Question Answering System Supported by Information Extraction*",
    "authors": [
        {
            "first": "Rohini",
            "middle": [],
            "last": "Srihari",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Cymfony Inc",
                "location": {
                    "addrLine": "5500 Main Street Williamsville",
                    "postCode": "14221",
                    "region": "NY"
                }
            },
            "email": "rohini@cymfony.com"
        },
        {
            "first": "Wei",
            "middle": [],
            "last": "Li",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Cymfony Inc",
                "location": {
                    "addrLine": "5500 Main Street Williamsville",
                    "postCode": "NY14221"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper discusses an information extraction (IE) system, Textract, in natural language (NL) question answering (QA) and examines the role of IE in QA application. It shows: (i) Named Entity tagging is an important component for QA, (ii) an NL shallow parser provides a structural basis for questions, and (iii) high-level domain independent IE can result in a QA breakthrough.",
    "pdf_parse": {
        "paper_id": "A00-1023",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper discusses an information extraction (IE) system, Textract, in natural language (NL) question answering (QA) and examines the role of IE in QA application. It shows: (i) Named Entity tagging is an important component for QA, (ii) an NL shallow parser provides a structural basis for questions, and (iii) high-level domain independent IE can result in a QA breakthrough.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "With the explosion of information in Internet, Natural language QA is recognized as a capability with great potential. Traditionally, QA has attracted many AI researchers, but most QA systems developed are toy systems or games confined to lab and a very restricted domain. More recently, Text Retrieval Conference (TREC-8) designed a QA track to stimulate the research for real world application.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "Due to little linguistic support from text analysis, conventional IR systems or search engines do not really perform the task of information retrieval; they in fact aim at only document retrieval. The following quote from the QA Track Specifications (www.research.att.com/ -singhal/qa-track-spec.txt) in the TREC community illustrates this point.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "Current information retrieval systems allow us to locate documents that might contain the pertinent information, but most of them leave it to the user to extract the useful information from a ranked list. This leaves the (often unwilling) user with a relatively large amount of text to consume. There is an urgent need for tools that would reduce the amount of text one might have to read in order to obtain the desired information. This track aims at doing exactly that for a special (and popular) class of information seeking behavior: QUESTION ANSWERING. People have questions and they need answers, not documents. Automatic question answering will definitely be a significant advance in the state-of-art information retrieval technology. Kupiec (1993) presented a QA system MURAX using an on-line encyclopedia. This system used the technology of robust shallow parsing but suffered from the lack of basic information extraction support. In fact, the most siginifcant IE advance, namely the NE (Named Entity) technology, occured after Kupiec (1993) , thanks to the MUC program (MUC-7 1998). High-level IE technology beyond NE has not been in the stage of possible application until recently.",
                "cite_spans": [
                    {
                        "start": 485,
                        "end": 498,
                        "text": "(and popular)",
                        "ref_id": null
                    },
                    {
                        "start": 742,
                        "end": 755,
                        "text": "Kupiec (1993)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 1038,
                        "end": 1051,
                        "text": "Kupiec (1993)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "AskJeeves launched a QA portal (www.askjeeves.com). It is equipped with a fairly sophisticated natural language question parser, but it does not provide direct answers to the asked questions. Instead, it directs the user to the relevant web pages, just as the traditional search engine does. In this sense, AskJeeves has only done half of the job for QA.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We believe that QA is an ideal test bed for demonstrating the power of IE. There is a natural co-operation between IE and IR; we regard QA as one major intelligence which IE can offer IR.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "An important question then is, what type of IE can support IR in QA and how well does it support it? This forms the major topic of this paper. We structure the remaining part of the paper as follows. In Section 1, we first give an overview of the underlying IE technology which our organization has been developing. Section 2 discusses the QA system. Section 3 describes the limitation of the current system. Finally, in Section 4, we propose a more sophisticated QA system supported by three levels of IE.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The last decade has seen great advance and interest in the area of IE. In the US, the DARPA sponsored Tipster Text Program [Grishman 1997 ] and the Message Understanding Conferences (MUC) [MUC-7 1998 ] have been the driving force for developing this technology. In fact, the MUC specifications for various IE tasks have become de facto standards in the IE research community. It is therefore necessary to present our IE effort in the context of the MUC program. MUC divides IE into distinct tasks, namely, NE (Named Entity), TE (Template Element), TR (Template Relation), CO (Co-reference), and ST (Scenario Templates) [Chinchor & Marsh 1998 ]. Our proposal for three levels of IE is modelled after the MUC standards using MUC-style representation. However, we have modified the MUC IE task definitions in order to make them more useful and more practical. More precisely, we propose a hierarchical, 3-level architecture for developing a kernel IE system which is domain-independent throughout.",
                "cite_spans": [
                    {
                        "start": 123,
                        "end": 137,
                        "text": "[Grishman 1997",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 188,
                        "end": 199,
                        "text": "[MUC-7 1998",
                        "ref_id": null
                    },
                    {
                        "start": 619,
                        "end": 641,
                        "text": "[Chinchor & Marsh 1998",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "The core of this system is a state-of-the-art NE tagger ], named Textract 1.0. The Textract NE tagger has achieved speed and accuracy comparable to that of the few deployed NE systems, such as NetOwl [Krupka & Hausman 1998 ] and Nymble [Bikel et al 1997] .",
                "cite_spans": [
                    {
                        "start": 200,
                        "end": 222,
                        "text": "[Krupka & Hausman 1998",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 236,
                        "end": 254,
                        "text": "[Bikel et al 1997]",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "It is to be noted that in our definition of NE, we significantly expanded the type of information to be extracted. In addition to all the MUC defined NE types (person, organization, location, time, date, money and percent), the following types/sub-types of information are also identified by the TextractNE module: These new sub-types provide a better foundation for defining multiple relationships between the identified entities and for supporting question answering functionality. For example, the key to a question processor is to identify the asking point (who, what, when, where, etc.) . In many cases, the asking point corresponds to an NE beyond the MUC definition, e.g. the how+adjective questions: how long (duration or length), how far (length), how often (frequency), how old (age), etc.",
                "cite_spans": [
                    {
                        "start": 561,
                        "end": 591,
                        "text": "(who, what, when, where, etc.)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "Level-2 IE, or CE (Correlated Entity), is concerned with extracting pre-defined multiple relationships between the entities. Consider the person entity as an example; the TextractCE prototype is capable of extracting the key relationships such as age, gender, affiliation, position, birthtime, birth__place, spouse, parents, children, where.from, address, phone, fax, email, descriptors. As seen, the information in the CE represents a mini-CV or profile of the entity. In general, the CE template integrates and greatly enriches the information contained in MUC TE and TR.",
                "cite_spans": [
                    {
                        "start": 252,
                        "end": 387,
                        "text": "gender, affiliation, position, birthtime, birth__place, spouse, parents, children, where.from, address, phone, fax, email, descriptors.",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "The final goal of our IE effort is to further extract open-ended general events (GE, or level 3 IE) for information like who did what (to whom) when (or how often) and where. By general events, we refer to argument structures centering around verb notions plus the associated information of time/frequency and location. We show an example of our defined GE extracted from the text below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "Julian Hill, a research chemist whose accidental discovery of a tough, taffylike compound revolutionized everyday life after it proved its worth in warfare and courtship, died on Sunday in Hockessin, Del.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "[1] <GE_TEMPLATE> :=\n    PREDICATE: die\n    ARGUMENT1: Julian Hill\n    TIME: Sunday\n    LOCATION: Hockessin, Del.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "Figure 1 shows the overall system architecture of the IE system Textract that our organization has been developing. The core of the system consists of three kernel IE modules and six linguistic modules. The multi-level linguistic modules serve as an underlying support system for the different levels of IE. The IE results are stored in a database which forms the basis for IE-related applications like QA, BR (browsing, threading and visualization) and AS (automatic summarization). The approach to IE taken here consists of a unique blend of machine learning and FST (finite state transducer) rule-based techniques [Roche & Schabes 1997]. By combining machine learning with an FST rule-based system, we are able to exploit the best of both paradigms while overcoming their respective weaknesses [Li & Srihari 2000].",
                "cite_spans": [
                    {
                        "start": 617,
                        "end": 639,
                        "text": "[Roche & Schabes 1997]",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 797,
                        "end": 816,
                        "text": "[Li & Srihari 2000]",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Overview of Textract IE",
                "sec_num": "1"
            },
            {
                "text": "In the TREC-8 QA track, our system relied on NE tagging: wherever possible, the asking point of a question is mapped to an NE type or sub-type, e.g. where (LOCATION), how far (LENGTH). Therefore, the NE tagger has been proven to be very helpful.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Application Modules",
                "sec_num": null
            },
            {
                "text": "Of course, the NE of the targeted type is only necessary but not complete in answering such questions because NE by nature only extracts isolated individual entities from the text. Nevertheless, using even crude methods like \"the nearest NE to the queried key words\" or \"the NE and its related key words within the same line (or same paragraph, etc.)\", in most cases, the QA system was able to extract text portions which contained answers in the top five list. Figure 2 illustrates the system design of TextractQA Prototype.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 462,
                        "end": 470,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "There are two components for the QA prototype: Question Processor and Text Processor. The Text Matcher module links the two processing results and tries to find answers to the processed question. Matching is based on keywords, plus the NE type and their common location within a same sentence. The following is an example where the asking point does not correspond to any type of NE in our definition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "[3] Why did David Koresh ask the FBI for a word processor ?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "The system then maps it to the following question template :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "[4] asking_point: REASON\n    key_word:     { ask, David, Koresh, FBI, word, processor }",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Application Modules",
                "sec_num": null
            },
            {
                "text": "The question processor scans the question to search for question words (wh-words) and maps them into corresponding NE types/sub-types or pre-defined notions like REASON.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "We adopt two sets of pattern matching rules for this purpose: (i) structure based pattern matching rules; (ii) simple key word based pattern matching rules (regarded as default rules).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "It is fairly easy to exhaust the second set of rules as interrogative question words/phrases form a closed set. In comparison, the development of the first set of rules are continuously being fine-tuned and expanded. This strategy of using two set of rules leads to the robustness of the question processor.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "The first set of rules are based on shallow parsing results of the questions, using Cymfony FST based Shallow Parser. This parser identifies basic syntactic constructions like BaseNP (Basic Noun Phrase), BasePP (Basic Prepositional Phrase) and VG (Verb Group).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "The following is a sample of the first set of rules: As seen, shallow parsing helps us to capture a variety of natural language question expressions. However, there are cases where some simple key word based pattern matching would be enough to capture the asking point. That is our second set of rules. These rules are used when the first set of rules has failed to produce results. The following is a sample of such rules:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "In the stage of question expansion, the template in [4] [asking, David,Koresh,FBI, word, processor} The last item in the asking._point list attempts to find an infinitive by checking the word to followed by a verb (with the part-of-speech tag VB). As we know, infinitive verb phrases are often used in English to explain a reason for some action.",
                "cite_spans": [
                    {
                        "start": 52,
                        "end": 55,
                        "text": "[4]",
                        "ref_id": null
                    },
                    {
                        "start": 56,
                        "end": 99,
                        "text": "[asking, David,Koresh,FBI, word, processor}",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Apptication Modutes",
                "sec_num": null
            },
            {
                "text": "On the text processing side, we first send the question directly to a search engine in order to narrow down the document pool to the first n, say 200, documents for IE processing. Currently, this includes tokenization, POS tagging and NE tagging. Future plans include several levels of parsing as well; these are required to support CE and GE extraction. It should be noted that all these operations are extremely robust and fast, features necessary for large volume text indexing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Text Processing",
                "sec_num": "2.2"
            },
            {
                "text": "Parsing is accomplished through cascaded finite state transducer grammars.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Text Processing",
                "sec_num": "2.2"
            },
            {
                "text": "The Text Matcher attempts to match the question template with the processed documents for both the asking point and the key words. There is a preliminary ranking standard built-in the matcher in order to find the most probable answers. The primary rank is a count of how many unique keywords are contained within a sentence. The secondary ranking is based on the order that the keywords appear in the sentence compared to their order in the question. The third ranking is based on whether there is an exact match or a variant match for the key verb. In the TREC-8 QA track competition, Cymfony QA accuracy was 66.0%. Considering we have only used NE technology to support QA in this run, 66.0% is a very encouraging result.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Text Matching",
                "sec_num": "2.3"
            },
            {
                "text": "The first limitation comes from the types of questions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limitation",
                "sec_num": "3"
            },
            {
                "text": "Currently only wh-questions are handled although it is planned that yes-no questions will be handled once we introduce CE and GE templates to support QA. Among the wh-questions, the why-question and how-question t are more challenging because the asking point cannot be simply mapped to the NE types/sub-types.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limitation",
                "sec_num": "3"
            },
            {
                "text": "The second limitation is from the nature of the questions. Questions like Where can l find the homepage for Oscar winners or Where can I find info on Shakespeare's works might be answerable easily by a system based on a well-maintained data base of home pages. Since our system is based on the processing of the underlying documents, no correct answer can be provided if there is no such an answer (explicitly expressed in English) in the processed documents. In TREC-8 QA, this is not a problem since every question is guaranteed to have at least one answer in the given document pool. However, in the real world scenario such as a QA portal, it is conceived that the IE results based on the processing of the documents should be complemented by other knowledge sources such as e-copy of yellow pages or other manually maintained and updated data bases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limitation",
                "sec_num": "3"
            },
            {
                "text": "The third limitation is the lack of linguistic processing such as sentence-level parsing and cross-sentential co-reference (CO). This problem will be gradually solved when high-level IE technology is introduced into the system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limitation",
                "sec_num": "3"
            },
            {
                "text": "A new QA architecture is under development; it will exploit all levels of the IE system, including CE and GE. The first issue is how much CE can contribute to a better support of QA. It is found that there are some frequently seen questions which can be better answered once the CE information is provided. These questions are of two types: (i) what/who questions about an NE; (ii) relationship questions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "Questions The next issue is the relationships between GE and QA. It is our belief that the GE technology will result in a breakthrough for QA.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "In order to extract GE templates, the text goes through a series of linguistic processing as shown in Figure 1 . It should be noted that the question processing is designed to go through parallel processes and share the same NLP resources until the point of matching and ranking.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 102,
                        "end": 110,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "The merging of question templates and GE templates in Template Matcher are fairly straightforward. As they both undergo the same NLP processing, the resulting semantic templates are of the same form. Both question templates and GE templates correspond to fairly standard/predictable patterns (the PREDICATE value is open-ended, but the structure remains stable). More precisely, a user can ask questions on general events themselves (did what) and/or on the participants of the event (who, whom, what) and/or the time, frequency and place of events (when, how often, where). This addresses 2 An alpha version of TextractQA supported by both NE and CE has been implemented and is being tested. by far the most types of general questions of a potential user.",
                "cite_spans": [
                    {
                        "start": 484,
                        "end": 501,
                        "text": "(who, whom, what)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "For example, if a user is interested in company acquisition events, he can ask questions like: Which companies ware acquired by Microsoft in 1999? Which companies did Microsoft acquire in 1999? Our system will then parse these questions into the templates as shown below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "[31] <Q_TEMPLATE> := PREDICATE: acquire ARGUMENT1: Microsoft ARGUMENT2: WHAT(COMPANY) TIME: 1999",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "If the user wants to know when some acquisition happened, he can ask: When was Netscape acquired?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "Our system will then translate it into the pattern below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "[32] <QTEMPLATE> := PREDICATE: acquire ARGUMENT1: WHO ARGUMENT2: Netscape TIME: WHEN Note that WHO, WHAT, WHEN above are variable to be instantiated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            },
            {
                "text": "Such question templates serve as search constraints to filter the events in our extracted GE template database. Because the question templates and the extracted GE template share the same structure, a simple merging operation would suffice. Nevertheless, there are two important questions to be answered: (i) what if a different verb with the same meaning is used in the question from the one used in the processed text? (ii) what if the question asks about something beyond the GE (or CE) information? These are issues that we are currently researching.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future Work: Multi-level IE Supported QA",
                "sec_num": "4"
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Nymble: a High-Performance Learning Name-finder",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "M"
                        ],
                        "last": "Bikel",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "194--201",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bikel D.M. et al. (1997) Nymble: a High-Performance Learning Name-finder. \"Proceedings of the Fifth Conference on Applied Natural Language Processing\", Morgan Kaufmann Publishers, pp. 194-201",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "MUC-7 Information Extraction Task Definition (version 5.1)",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Chinchor",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Marsh",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of MUC-7",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chinchor N. and Marsh E. (1998) MUC-7 Information Extraction Task Definition (version 5.1), \"Proceedings of MUC-7\".",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "TIPSTER Architecture Design Document Version 2.3",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Grishman R. (1997) TIPSTER Architecture Design Document Version 2.3. Technical report, DARPA",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "IsoQuest Inc.: Description of the NetOwl (TM) Extractor System as Used for MUC-7",
                "authors": [
                    {
                        "first": "G",
                        "middle": [
                            "R"
                        ],
                        "last": "Krupka",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Hausman",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of MUC-7",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Krupka G.R. and Hausman K. (1998) IsoQuest Inc.: Description of the NetOwl (TM) Extractor System as Used for MUC-7, \"Proceedings of MUC-7\".",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "MURAX: A Robust Linguistic Approach For Question Answering Using An On-Line Encyclopaedia",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kupiec",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Proceedings of SIGIR-93 93",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kupiec J. (1993) MURAX: A Robust Linguistic Approach For Question Answering Using An On-Line Encyclopaedia, \"Proceedings of SIGIR-93 93\" Pittsburgh, Penna.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Flexible Information Extraction Learning Algorithm, Final Technical Report",
                "authors": [
                    {
                        "first": "W &",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Srihari",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the Seventh Message Understanding Conference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Li, W & Srihari, R. 2000. Flexible Information Extraction Learning Algorithm, Final Technical Report, Air Force Research Laboratory, Rome Research Site, New York MUC-7 (1998) Proceedings of the Seventh Message Understanding Conference (MUC-7), published on the website _http://www.muc.saic.com/",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A Domain Independent Event Extraction Toolkit",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Roche",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Finite-State Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Roche E. and Schabes Y. (1997) Finite-State Language Processing, MIT Press, Cambridge, MA Srihari R. (1998) A Domain Independent Event Extraction Toolkit, AFRL-IF-RS-TR-1998-152 Final Technical Report, Air Force Research Laboratory, Rome Research Site, New York",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "uris": null,
                "text": "Textract IE System Architecture",
                "type_str": "figure"
            },
            "TABREF4": {
                "html": null,
                "num": null,
                "content": "<table><tr><td>Process Question</td><td/></tr><tr><td colspan=\"2\">Shallow parse question</td></tr><tr><td colspan=\"2\">Determine Asking Point</td></tr><tr><td colspan=\"2\">Question expansion (using word lists)</td></tr><tr><td>Process Documents</td><td/></tr><tr><td colspan=\"2\">Tokenization, POS tagging, NE Indexing</td></tr><tr><td colspan=\"2\">Shallow Parsing (not yet utilized)</td></tr><tr><td>Text Matcher</td><td/></tr><tr><td colspan=\"2\">Intersect search engine results with NE</td></tr><tr><td>rank answers</td><td/></tr><tr><td colspan=\"2\">2.1 Question Processing</td></tr><tr><td colspan=\"2\">The Question Processing results are a list of</td></tr><tr><td colspan=\"2\">keywords plus the information for asking point.</td></tr><tr><td colspan=\"2\">For example, the question:</td></tr><tr><td/><td>The output before</td></tr><tr><td colspan=\"2\">question expansion is a simple 2-feature template</td></tr><tr><td>as shown below:</td><td/></tr><tr><td colspan=\"2\">[3] asking_point: PERSON</td></tr><tr><td>key_word:</td><td>{ won, 1998, Nobel,</td></tr><tr><td/><td>Peace, Prize }</td></tr><tr><td/><td>Question Prc~:essor</td></tr><tr><td/><td>i : :eXt i .... i</td></tr><tr><td/><td>Figure 2: Textract/QA 1.0 Prototype Architecture</td></tr><tr><td/><td>The general algorithm for question</td></tr><tr><td/><td>answering is as follows:</td></tr></table>",
                "type_str": "table",
                "text": "P r~_~ ............ ?~ i i ~ ..............................."
            },
            "TABREF7": {
                "html": null,
                "num": null,
                "content": "<table><tr><td colspan=\"3\">Q: Who is Julian Hill?</td></tr><tr><td>A: name:</td><td/><td colspan=\"2\">Julian Werner Hill</td></tr><tr><td>type:</td><td/><td>PERSON</td></tr><tr><td>age:</td><td/><td>91</td></tr><tr><td>gender:</td><td/><td>MALE</td></tr><tr><td>position:</td><td/><td>research chemist</td></tr><tr><td colspan=\"2\">affiliation:</td><td>Du Pont Co.</td></tr><tr><td colspan=\"2\">education:</td><td colspan=\"2\">Washington University;</td></tr><tr><td/><td/><td>MIT</td></tr><tr><td colspan=\"3\">Q: What is Du Pont?</td></tr><tr><td colspan=\"3\">A: name: Du Pont Co,</td></tr><tr><td colspan=\"3\">type: COMPANY</td></tr><tr><td colspan=\"4\">staff: Julian Hill; Wallace Carothers.</td></tr><tr><td>Questions</td><td colspan=\"2\">specifically about</td><td>a CE</td></tr><tr><td colspan=\"4\">relationship include: For which company did</td></tr><tr><td colspan=\"4\">Julian Hill work? (affiliation relationship) Who</td></tr><tr><td colspan=\"4\">are employees of Du Pont Co.? (staff</td></tr><tr><td colspan=\"4\">relationship) What does Julian Hill do?</td></tr><tr><td colspan=\"3\">(position/profession relationship)</td><td>Which</td></tr><tr><td colspan=\"4\">university did Julian Hill graduate from?</td></tr><tr><td colspan=\"3\">(education relationship), etc. 2</td></tr></table>",
                "type_str": "table",
                "text": "of the following format require CE templates as best answers: who/what is NE? For example, Who is Julian Hill? Who is Bill Clinton? What is Du Pont? What is Cymfony? To answer these questions, the system can simply 1 For example, How did one make a chocolate cake? How+Adjective questions (e.g. how long, how big, how old, etc.) are handled fairly well.retrieve the corresponding CE template to provide an \"assembled\" answer, as shown below."
            }
        }
    }
}