diff --git "a/Full_text_JSON/prefixW/json/W00/W00-0404.json" "b/Full_text_JSON/prefixW/json/W00/W00-0404.json" new file mode 100644--- /dev/null +++ "b/Full_text_JSON/prefixW/json/W00/W00-0404.json" @@ -0,0 +1,2021 @@ +{ + "paper_id": "W00-0404", + "header": { + "generated_with": "S2ORC 1.0.0", + "date_generated": "2023-01-19T05:35:11.669353Z" + }, + "title": "Extracting Key Paragraph based on Topic and Event Detection --Towards Multi-Document Summarization", + "authors": [ + { + "first": "Fumiyo", + "middle": [], + "last": "Fukumoto", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "Yamanashi University", + "location": { + "postCode": "4-3-11, 400-8511", + "settlement": "Takeda, Kofu", + "country": "Japan" + } + }, + "email": "" + }, + { + "first": "Yoshimi", + "middle": [], + "last": "Suzuki", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "Yamanashi University", + "location": { + "postCode": "4-3-11, 400-8511", + "settlement": "Takeda, Kofu", + "country": "Japan" + } + }, + "email": "" + } + ], + "year": "", + "venue": null, + "identifiers": {}, + "abstract": "This paper proposes a method for extracting key paragraph for multi-document summarization based on distinction between a topic and a~ event. A topic emd an event are identified using a simple criterion called domain dependency of words. The method was tested on the TDT1 corpus which has been developed by the TDT Pilot Study and the result can be regarded as promising the idea of domain dependency of words effectively employed.", + "pdf_parse": { + "paper_id": "W00-0404", + "_pdf_hash": "", + "abstract": [ + { + "text": "This paper proposes a method for extracting key paragraph for multi-document summarization based on distinction between a topic and a~ event. A topic emd an event are identified using a simple criterion called domain dependency of words. The method was tested on the TDT1 corpus which has been developed by the TDT Pilot Study and the result can be regarded as promising the idea of domain dependency of words effectively employed.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Abstract", + "sec_num": null + } + ], + "body_text": [ + { + "text": "As the volume of olfline documents has drastically increased, summarization techniques have become very importaalt in IR and NLP studies. Most of the summarization work has focused on a single document. Tiffs paper focuses on multi-document summarization: broadcast news documents about the same topic. One of the major problems in the multidocument summarization task is how to identify differences and similza'ities across documents. This can be interpreted as a question of how to make a clear distinction between an e~ent mM a topic in docu= meats. Here, an event is the subject of a document itself, i.e. a writer wants to express, in other words, notions of who, what, where, when. why and how in a document. On the other hand, a topic in this paper is some unique thing that happens at some specific time and place, and the unavoidable consequences.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1" + }, + { + "text": "It'becomes background among documents. For example, in the documents of :Kobe Japan quake', the event includes early reports of damage, location and nature of quake, rescue efforts, consequences of the quake, a~ld on-site reports, while the topic is Kobe Japaa~ quake. 
The well-known past experience from IR ~ that notions of who, what, where, when, why and how may not make a great contribution to the topic detection and tracking task (Allan and Papka, 1998) causes this fact, i.e. a topic and an event are different from each other 1 .", + "cite_spans": [ + { + "start": 437, + "end": 460, + "text": "(Allan and Papka, 1998)", + "ref_id": "BIBREF2" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1" + }, + { + "text": "1 Some topic words can also be an event. Fbr instance: in the document shown in Figure 1 : 'Japan: and =quake' are topic words and also event words in the document. However, we regarded these words as a topic, i.e. not be an event.", + "cite_spans": [], + "ref_spans": [ + { + "start": 80, + "end": 88, + "text": "Figure 1", + "ref_id": "FIGREF1" + } + ], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1" + }, + { + "text": "In this paper: we propose a. method fi)r extracting key paragraph for multi-document smnmarization based on distinction between a topic and an event. We use a silnple criterion called domain dependency of words as a solution and present how the i.dea of domain dependency of words can be utilized effectively to identify a topic and an event: and thus allow multi-document summarization.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1" + }, + { + "text": "The basic idea of our approach is that whether a word appeared in a document is a topic (an event) or not, depends on the domain to which the document belongs. Let us take a look at the following document from the TDT1 corpus. Figure I is the document whose topic is 'Kobe Japan quake', and the subject of the document (event words) is 'Two Americans known dead in Japan quake'. Underlined words denote a topic, and the words marked with '[ ]' are events. '1,,,7' of Figure 1 is paragraph id. Like Lulm's technique of keyword extraction, our method assumes that an event associated with a document appears throughout parm graphs (Luhn, 1958) , but a topic does not. This is because an event is the subject of a document itself. while a topic is an event, along with all directly related events. In Figure 1 , event words 'Americans' and 'U.S.', for instance, appears across paragraphs, while a topic word, for example, 'Kobe' appears only the third paragraph. Let us consider further a broad coverage domain which consists of a small number of sanaple news documents about the same topic, 'Kobe Japan quake'. (1-3) Kobe quake leaves questions about medical system 1. The earthquake that devastated Kobe in January raised serious questions about the efficiency of Japan's emergency medical system, a government report released on Tuesday said. 2. 'The earthquake exposed many i~ues in terms of quantity, quality, promptness and efficiency of Japan's medical care in time of disaster,' the report on-'ff'h-~alth and welfare said. Underlined words in Figure 2 and 3 show the topic of these documents. In these two documents, :Kobe' which is a topic appears in eveD\" document, while 'Americans' and 'U.S.' which are events of the document shown in Figure 1 , does not appear. Our technique for making the distinction between a topic and an event explicitly exploits this feature of the domain dependency of words: how strongly a word features a given set of data. The rest of the paper is organized as follows. 
The next section provides domain dependency of words which is used to identify a topic and an event for broadcast news documents. We then present a method for extracting topic and event words: and describe a paragraph-based summarization algorithm using the result of topic and event extraction. Fi-nally~ we report some experiments using the TDT1 corpus which has been developed by the TDT (Topic Detection and Tracking) Pilot Study (Allan and Carbonell, 1998) with a discussion of evaluation.", + "cite_spans": [ + { + "start": 629, + "end": 641, + "text": "(Luhn, 1958)", + "ref_id": null + }, + { + "start": 2441, + "end": 2468, + "text": "(Allan and Carbonell, 1998)", + "ref_id": "BIBREF1" + } + ], + "ref_spans": [ + { + "start": 227, + "end": 235, + "text": "Figure I", + "ref_id": null + }, + { + "start": 798, + "end": 806, + "text": "Figure 1", + "ref_id": "FIGREF1" + }, + { + "start": 1548, + "end": 1556, + "text": "Figure 2", + "ref_id": "FIGREF3" + }, + { + "start": 1744, + "end": 1752, + "text": "Figure 1", + "ref_id": "FIGREF1" + } + ], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1" + }, + { + "text": "The domain dependency of words that how strongly a word features a given set of data (documents) contributes to event extraction, as we previously reported (Fukumoto et al.: 1997) . In the study, we hypothesi~d that the articles from the Wall Street Journal corpus can be structured by three levels, i.e. Domain, Article and Paragraph. It'a word is nil event in a given article, it satisfies the two conditions: (1) The dispersion value of the word in the Paragraph level is smaller than that of the Art.iele, since the .word appears throughout paragr~q~hs in the Paragraph level rather than articles in the Article level.", + "cite_spans": [ + { + "start": 156, + "end": 179, + "text": "(Fukumoto et al.: 1997)", + "ref_id": "BIBREF8" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Domain Dependency of Words", + "sec_num": "2" + }, + { + "text": "(2) The dispersion value of the word in the Article is smaller than that of the Domain, as the word appears across articles rather than domains. However, ~here are two problems to adapt it to multl-document summarization task. The first is that the method extracts only events in the document. Because the goal of the study is to summarize a single document, and thus there is no answer to the question of how to identi~' differences and similarities across documents. The second is that the performance of the method greatly depends on the structure of a given data itself. Like the Wall Street Journal corpus, (i) if a given data caal be structured by three levels, Paragraph, Article and Domain, each of which consists of several paragraphs, articles and domains, respectively, aaad (ii) if Domain consists of different subject domains, such as 'aerospace', 'environment' and 'stock market', the method can be done with satisfactoD' accuracy. However, there is no guarantee to make such an appropriate structure from a given set of documents in the multi-document summarization task.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Domain Dependency of Words", + "sec_num": "2" + }, + { + "text": "The purpose of this paper is to define domain dependency of words for a number of sample documents about the same topic, and thus for multidocument summarization task. 
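Before the structure in Figure 4 is described, it may help to fix the data layout the rest of the paper assumes: a small set of same-topic documents, each split into paragraphs, each paragraph treated as a bag of nouns (Section 5.1). The Python sketch below is purely illustrative -- the paper gives no code, and the token lists are an invented fragment of the 'Kobe Japan quake' example -- but it shows the two granularities over which the dispersion and deviation measures of the next section are computed.

```python
from collections import Counter

# Documents -> paragraphs -> word tokens (nouns).  A toy fragment of the
# 'Kobe Japan quake' topic set; the actual TDT1 documents are much longer.
kobe_docs = [
    # (1-1) Quake collapses buildings in central Japan
    [["quake", "Japan", "buildings"], ["Kobe", "damage"]],
    # (1-2) Two Americans known dead in Japan quake
    [["Americans", "Japan", "quake"], ["U.S.", "Americans"], ["Kobe", "rescue"]],
    # (1-3) Kobe quake leaves questions about medical system
    [["Kobe", "quake", "medical"], ["Japan", "report", "medical"]],
]

# Term frequencies at the two levels used throughout the paper:
# the Document level and, inside each document, its Paragraph level.
doc_tf = [Counter(tok for para in doc for tok in para) for doc in kobe_docs]
par_tf = [[Counter(para) for para in doc] for doc in kobe_docs]
```

Figure 4, described next, gives the same picture graphically.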
Figure 4 illustrates the structure of broadcast news documents which have been developed by the TDT (Topic Detection and Tracking) Pilot Study (Allan and Carbonell, 1998) . It consists of two levels, Paragraph and Document. In Document level, there is a small number of sample news documents about the same topic. These documents are arranged in chronological order such as, '(l-l) Quake collapses buildings in central ,Japan ( Figure 2 )', '(1-2) Two Americans known dead in Japan quake ( Figure 1 )' and '(1-3) gobe quake leaves questions about medical system (Figure 3) :0 Given the structure shown in Figure 4 , how can we identi~\" every word in document (1-2) with an event, a topic or a general word? Our method assumes that aal event associated with a document appears across paragraphs, but a topic word does not. Then, we use domain dependency of words to extract event and topic words in document (1-2). Domain dependency of words is a measure showing how greatly each word features a given set of data. In Figure 4 .. let 'C)', 'A' and 'x' denote a topicl an event and a general word in document (1-2), respectively. We recall the example shown in Figure 1 . 'A', for instance, 'U.S.' appears across paragraphs. However, in the Document level, :A' frequently appears in document, (1-2) itself. On the basis of this example, we hypothesize that if word i is an event, it\"satisfies the following condition:", + "cite_spans": [ + { + "start": 311, + "end": 338, + "text": "(Allan and Carbonell, 1998)", + "ref_id": "BIBREF1" + } + ], + "ref_spans": [ + { + "start": 168, + "end": 176, + "text": "Figure 4", + "ref_id": "FIGREF5" + }, + { + "start": 596, + "end": 604, + "text": "Figure 2", + "ref_id": "FIGREF3" + }, + { + "start": 658, + "end": 666, + "text": "Figure 1", + "ref_id": "FIGREF1" + }, + { + "start": 730, + "end": 740, + "text": "(Figure 3)", + "ref_id": "FIGREF4" + }, + { + "start": 773, + "end": 781, + "text": "Figure 4", + "ref_id": "FIGREF5" + }, + { + "start": 1185, + "end": 1193, + "text": "Figure 4", + "ref_id": "FIGREF5" + }, + { + "start": 1327, + "end": 1335, + "text": "Figure 1", + "ref_id": "FIGREF1" + } + ], + "eq_spans": [], + "section": "Domain Dependency of Words", + "sec_num": "2" + }, + { + "text": "x x :h0 0 ; 5 A i=2 0.3} o X 0 0 X ~ oo*.. 0 ' - ..J i=m oo i Paragraphleve~ ! ' 0/' , X", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Domain Dependency of Words", + "sec_num": "2" + }, + { + "text": "[1] Word i greatly depends on a particular document in the Document level rather than a particular paragraph in the Paragraph.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Domain Dependency of Words", + "sec_num": "2" + }, + { + "text": "Next, we turn to identi~\" the remains (words) wit.h a topic, or a general word. In Figure 5 ; a topic of documents (1-1) ~ (1-3), for instance, :Kobe' aPpears in a particular paragraph in each level of Paragraphl, Paragraph2 and Paragraph3. Here, (1-1), (1-2) and (1-3) corresponds to Paragraph1, Paragraph2 and Paragraph3, respectively. On the other hand, in Document level, a topic frequently appears acros.~ documents. Then: we hypothesize that if word i is a", + "cite_spans": [], + "ref_spans": [ + { + "start": 83, + "end": 91, + "text": "Figure 5", + "ref_id": null + } + ], + "eq_spans": [], + "section": "Domain Dependency of Words", + "sec_num": "2" + }, + { + "text": ". 
topic, it satisfies the following condition:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "(H)", + "sec_num": "33" + }, + { + "text": "o':.i :~1 Paragraph 1: C. level ~! C' xi x j=l \u2022 o\u00b0-.\u00b0 i j=2 ~ ..... j=n ic. ParagraphZi O \" level !! x i (1-2) p-3) \u00b0 x x [ 0 t i z L i :i=2 i=3 m~ --i X i ,.,.o i=rn i0i!, i O:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "(H)", + "sec_num": "33" + }, + { + "text": "[2] Word i greatly depends on a particular paragraph in each Paragraph level rather than a particular document in Document.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "(H)", + "sec_num": "33" + }, + { + "text": "We hypothesized that the domain dependency of words is a key clue to make a distinction between a topic and an event. This can be broken down into two observations: (i) whether a word appears across paragraphs (documents), (it) whether or not a word appears frequently. We represented the former by using dispersion value, and the latter by deviation value. Topic and event words are extracted by using these values. The first step to extract topic and event words is to assign weight to the individual word in a document. We applied TF*IDF to each level of the Document and Paragraph, i.e. Paragraphl, Paragraph2 and Paragraph3.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Topic and Event Extraction", + "sec_num": "3" + }, + { + "text": "N Wdit = TFdit * log Ndt (1)", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Topic and Event Extraction", + "sec_num": "3" + }, + { + "text": "Wdit in formula (1) is TF*IDF of term t in the i-th document. In a similar way, Wpit denotes TF*IDF of the term t in the i-th paragraph. TFdit in (1) denotes term frequency of t in the i-th document. N is the number of documents and Ndt is the number of do(:uments where t occurs. The second step is to calculate domain dependency of words. We defined it by using formula (2) and 3.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Topic and Event Extraction", + "sec_num": "3" + }, + { + "text": "DispOt = /I/E'~=l(I4;dit -mean')2 (2) \u00a5 Tn", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Topic and Event Extraction", + "sec_num": "3" + }, + { + "text": "De vdi, = (Wditmeant) ,10+50", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Topic and Event Extraction", + "sec_num": "3" + }, + { + "text": "(3)", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Topic and Event Extraction", + "sec_num": "3" + }, + { + "text": "Formula (2) is dispersion value of term t in the level of Document which consists of m documents, and denotes how frequently t appears across documents.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "DispDt", + "sec_num": null + }, + { + "text": "In a similar way, DispPt denotes dispersion of term t in the level of Paragraph. Formula 3is the deviation value of t in the i-th document and denotes how frequently it appears in a particular document, the i-th document. Devpit is deviation of term t in the i-th paragraph. In 2and 3, meant is the mean of the total TF*IDF values of term t in the level of Document. The last step is to extract a topic and an ever~t using fonmfla (2) and (3). We recall that if t is an event, it satisfies [1] described in section 2. 
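The following sketch puts formulas (1)-(3) and conditions [1]/[2] together. It is an interpretation rather than the authors' implementation: the function names are ours, formula (3) is reconstructed as the usual deviation score 10 * (W - mean) / sd + 50 (the published equation is typeset poorly), and paragraph-level statistics are taken over the paragraphs of the target document only, which the paper leaves implicit.

```python
import math
from collections import Counter

# docs: documents -> paragraphs -> word tokens, as in the earlier sketch.

def tfidf(units):
    """Formula (1) applied to a list of text units (documents or paragraphs):
    W_ut = TF_ut * log(N / N_t), with N units and N_t units containing t."""
    n = len(units)
    tfs = [Counter(u) for u in units]
    df = Counter(t for tf in tfs for t in tf)
    return {t: [tf[t] * math.log(n / df[t]) for tf in tfs] for t in df}

def dispersion(weights):
    """Formula (2): how evenly a term's TF*IDF spreads over the units."""
    mean = sum(weights) / len(weights)
    return math.sqrt(sum((w - mean) ** 2 for w in weights) / len(weights))

def deviations(weights):
    """Formula (3): deviation score of the term in each unit (50 = average)."""
    mean, sd = sum(weights) / len(weights), dispersion(weights)
    return [10.0 * (w - mean) / sd + 50.0 if sd else 50.0 for w in weights]

def classify_terms(docs, i):
    """Label each term of document i as 'event', 'topic' or 'general',
    using condition [1] (an event leans on document i rather than on any
    single paragraph) and condition [2] (a topic leans on particular
    paragraphs but spreads across documents)."""
    doc_tokens = [[t for para in doc for t in para] for doc in docs]
    w_doc = tfidf(doc_tokens)   # Document level
    w_par = tfidf(docs[i])      # Paragraph level of document i (assumption)
    labels = {}
    for t in set(doc_tokens[i]):
        disp_d, disp_p = dispersion(w_doc[t]), dispersion(w_par[t])
        dev_d = deviations(w_doc[t])[i]
        dev_p = deviations(w_par[t])
        if disp_p < disp_d and all(d < dev_d for d in dev_p):
            labels[t] = "event"            # condition [1]
        elif disp_p > disp_d and any(d >= dev_d for d in dev_p):
            labels[t] = "topic"            # condition [2]
        else:
            labels[t] = "general"
    return labels
```

The same two tests are written out as formulas (4)-(7) below.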
This is shown by using formula (4) mad (5).", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "DispDt", + "sec_num": null + }, + { + "text": "EQUATION", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [ + { + "start": 0, + "end": 8, + "text": "EQUATION", + "ref_id": "EQREF", + "raw_str": "DispPt < DispDt (4) for all Pi E di Devpjt < Devdit", + "eq_num": "(5)" + } + ], + "section": "DispDt", + "sec_num": null + }, + { + "text": "Formula (4) shows that t frequently appears across paragraphs rather than documents. In formula 5, di is the i-th document and consists of the number of n paragraphs (see Figure 4 ). Pi is an element of di. (5) shows that t frequently appears in the i-th document di rather than paragraphs pj ( 1 < j < n). On the other hand: if t satisfies formula (6) and (7), then propose t as a topic.", + "cite_spans": [], + "ref_spans": [ + { + "start": 171, + "end": 179, + "text": "Figure 4", + "ref_id": "FIGREF5" + } + ], + "eq_spans": [], + "section": "DispDt", + "sec_num": null + }, + { + "text": "DispPt > DispDt (6) for all dl E D, Pit exists such that Devpjt >_ Devdlt 7In formula (7), D consists of the number of rn docaments (see Figure 5 ). (7) denotes that t frequently appears in the particular paragraph pj rather than the document di which includes pj.", + "cite_spans": [], + "ref_spans": [ + { + "start": 137, + "end": 145, + "text": "Figure 5", + "ref_id": null + } + ], + "eq_spans": [], + "section": "DispDt", + "sec_num": null + }, + { + "text": "The summarization task in this paper is paragraphbased extraction . Basically, paragraphs which include not only event words but also topic words are considered to be significant paragraphs. The basic algorithm works as follows:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "1. For each document: extract topic and event words.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "2. Determine the paragraph weights for all paragraphs in the documents:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "(a) Compute the sum of topic weights over the total number of topic words for each paragraph.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "(b) Compute the sum of event weights over the total number of event words for each paragraph. A topic and an event weights are calculated by using Devdlt in formula (3). Here, t is a topic or an evcnt and i is the i-th document in the documents.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "(c) Compute the sum of (a) and (b) for each paragraph.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "3. Sort the paragraphs t~ccording to their weights and extract the N highest weighted paragrai~hs in documents in order to yield summarization of the documents.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "4. When their weights are the same, Compute the sum of all the topic and event word weights. 
Select a paragraph whose weight is higher than the others.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction", + "sec_num": "4" + }, + { + "text": "Evaluation of extracting key paragraph based on multi-document is difficult. First, we have not found an existing collection of summaries of multiple documents. Second, the maamal effort needed to judge system output is far more extensive than for single document summarization. Consequently, we focused on the TDT1 corpus. This is because (i) events have been defined to support the TDT study effort, (ii) it was completely annotated with respect to these events (Allan and Carbonell, 1997) . Therefore, we do not need the manual effort to collect documents which discuss about the target event. We report the results of three experiments. The first experiment, Event Extraction, is concerned with event extraction technique, ha the second experiment, Tracking Task, we applied the extracted topics to tracking task (Allan and Carbonell, 1998) . The third experiment: Key Paragraph Extraction is conducted to evaluate how the extracted topic and event words can be used effectively to extract key paragraph.", + "cite_spans": [ + { + "start": 464, + "end": 491, + "text": "(Allan and Carbonell, 1997)", + "ref_id": "BIBREF0" + }, + { + "start": 817, + "end": 844, + "text": "(Allan and Carbonell, 1998)", + "ref_id": "BIBREF1" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Experiments", + "sec_num": "5" + }, + { + "text": "The TDT1 corpus comprises a set of documents (.15,863) that includes both newswire (Reuters) 7..965 and a manual transcription of the broadcast news speech (CNN) 7,898 documents. A set of 25 target events were defined 2 All documents were tagged by the tagger (Brill, 1992 ", + "cite_spans": [ + { + "start": 260, + "end": 272, + "text": "(Brill, 1992", + "ref_id": "BIBREF6" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Data", + "sec_num": null + }, + { + "text": "We collected 300 documents from the TDT1 corpus, each of which is mmolated with respect to one of 25 events.' The result is shown in Table 1 . In Table 1 , 'Event type' illustrates the target events defined by the TDT Pilot Study. 'Doe' denotes the number of documents. 'Rec' (Recall) is the immber of correct events divided by the total mnnber of events which are selected by a human, and 'Prec' (Precision) stands for the number of correctevents divided by the number of events which are selected by our method. The denominator 'Rec' is made by a hmnan judge. 'Accuracy' in Table 1 is the total average ratio.", + "cite_spans": [], + "ref_spans": [ + { + "start": 133, + "end": 140, + "text": "Table 1", + "ref_id": "TABREF10" + }, + { + "start": 146, + "end": 153, + "text": "Table 1", + "ref_id": "TABREF10" + }, + { + "start": 576, + "end": 583, + "text": "Table 1", + "ref_id": "TABREF10" + } + ], + "eq_spans": [], + "section": "Event Extraction", + "sec_num": null + }, + { + "text": "In Table 1 , recall and precision values range from 55.0/47.0 to 83.3/84.2, the average being 71.0/72.2. The worst result of recall and precision was when event type was 'Serbs violate Bihac' (55.0/59.3). We currently hypothesize that this drop of accuracy is due to the fhct that some documents are against our assumption of an event. Examining the documents whose event type is 'Serbs violate Bihac', 3 ( one from CNN and two from Reuters).out of 16 documents has discussed the same event, i.e. 'Bosnian Muslim enclave hit by heavy shelling'. 
As a result, the event appears across these three documents\u2022 Future research will shed nmre light on that.", + "cite_spans": [], + "ref_spans": [ + { + "start": 3, + "end": 10, + "text": "Table 1", + "ref_id": "TABREF10" + } + ], + "eq_spans": [], + "section": "Event Extraction", + "sec_num": null + }, + { + "text": "Tracking task in the TDT project is starting from a few sample documents and finding all subsequent documents that discuss the same event (Allan and Carbonell, 1998) , (Carbonell et al., 1999) . The corpus is divided into two parts: training set and test set. Each of the documents is flagged as to whether it discusses the target event, and these flags ('YES', :'NO') axe the only information used for training the .system to correctly classiC\" the target event. We applied the extracted topic to the tracking task under \u2022 these conditions. The basic algorithm used in the experiment is as follows: Let $1: --', S,, be all the other training documents (where m is the number of training documents which does not belong to the target event) and Sx be a test docmnent which should be classified as to whether or not it discusses the target event. 81, \"\" \", Sm mid Sz are represented \" by term vectors as follows: Sx is judged to be a document that discusses the target event.", + "cite_spans": [ + { + "start": 138, + "end": 165, + "text": "(Allan and Carbonell, 1998)", + "ref_id": "BIBREF1" + }, + { + "start": 168, + "end": 192, + "text": "(Carbonell et al., 1999)", + "ref_id": "BIBREF7" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Tracking Task", + "sec_num": "5.3" + }, + { + "text": "Ill ti2 s.t\u2022 li.i = { f(t,A if t,~ (1 < i < m", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Tracking Task", + "sec_num": "5.3" + }, + { + "text": "We used the standard TDT evaluation measure Table 2 illustrates the result.", + "cite_spans": [], + "ref_spans": [ + { + "start": 44, + "end": 51, + "text": "Table 2", + "ref_id": "TABREF5" + } + ], + "eq_spans": [], + "section": "Tracking Task", + "sec_num": "5.3" + }, + { + "text": "3. Table 3 . 'Event' denotes event words in the first document in chronological order from A~ ---4, and i the title of the document is 'Emergency Work Continues After Earthquake in Japan'. Table 3 clearly demonstrates that the criterion, domain dependency of-''words effectively employed. . 'Miss' means Miss rate, which is the ratio of the doounents that were judged as YES but were not evahmted as YES for the run in question.", + "cite_spans": [], + "ref_spans": [ + { + "start": 3, + "end": 10, + "text": "Table 3", + "ref_id": "TABREF7" + }, + { + "start": 189, + "end": 196, + "text": "Table 3", + "ref_id": "TABREF7" + } + ], + "eq_spans": [], + "section": "Tracking Task", + "sec_num": "5.3" + }, + { + "text": "'F/A' shows false alarm rate and 'FI' is a measure that balances recall and precision. 'Rec' denotes the ratio of the documents judged YES that were also evaluated as YES, and 'Prec' is the percent of the documents that were evaluated as YES which correspond to documents actually judged as YES. Table 2 shows that more training data helps the performance, as the best result was when we used :Yt = 16. Table 3 illustrates the extracted topic and event words in a sample document. The topic is 'Kobe Japan quake' and the number of positive training documents is 4. 
'Devpzt', 'Devd]t', 'DispPt' and 'DispDt' denote values calculated by using formula", + "cite_spans": [], + "ref_spans": [ + { + "start": 296, + "end": 303, + "text": "Table 2", + "ref_id": "TABREF5" + }, + { + "start": 403, + "end": 410, + "text": "Table 3", + "ref_id": "TABREF7" + } + ], + "eq_spans": [], + "section": "Event type", + "sec_num": null + }, + { + "text": "(2) and (3). , Overall, the curves also show that more training helps tile performance, while there is no significant B difference among -'Yt = 2, 4 and 8.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Event type", + "sec_num": null + }, + { + "text": "We used 4 different sets as a test data. Each set con-\u2022 sists of 2, 4.. 8 and 16 documents. For each set, we I I", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Key Paragraph Extraction roll", + "sec_num": "5.4" + }, + { + "text": "We collected 300 docmnents from the TDT1 corpus, each of which is annotated with respect to one of 25 events.' The result is shown in Table 1 . In Table 1 .. 'Event type' illustrates the target events defined by the TDT Pilot Study. ~Doc' denotes the number of documents. 'Rec' (Recall) is the nmnbet of correct events divided by the total number of events which are selected by a humaa, and :Pree ~ (Precision) stands for the number of correct-events divided by the number of events which are selected by our method. The denominator 'Rec: is made by a human judge. 'Accuracy' in Table 1 is the total average ratio.", + "cite_spans": [], + "ref_spans": [ + { + "start": 134, + "end": 141, + "text": "Table 1", + "ref_id": "TABREF10" + }, + { + "start": 147, + "end": 154, + "text": "Table 1", + "ref_id": "TABREF10" + }, + { + "start": 580, + "end": 587, + "text": "Table 1", + "ref_id": "TABREF10" + } + ], + "eq_spans": [], + "section": "Event Extraction", + "sec_num": "5.2" + }, + { + "text": "In Table 1 , recall and precision values range, from 55.0/47.0 to 83.3/84.2, the average being 71.0/72.2. The worst result of recall and precision was when event type was 'Serbs violate Bihac' (55.0/59.3). We currently hypothesize that this drop of accuracy is due to the fact that some documents are against our assumption of an event. Examining the ctocuments whose event type is 'Serbs violate Bihac', 3 ( one from CNN and two from Reuters) out of 16 documents has discussed the same evefit, i.e. 'Bosnian Muslim enclave hit by heavy shelling'. As a result, the event appears across these three documents. Future research will shed more light on that.", + "cite_spans": [], + "ref_spans": [ + { + "start": 3, + "end": 10, + "text": "Table 1", + "ref_id": "TABREF10" + } + ], + "eq_spans": [], + "section": "Event Extraction", + "sec_num": "5.2" + }, + { + "text": "Tracking task in the TDT project is starting from a few sample documents and finding all subsequent documents that discuss the same event (Allan and Carbonell, 1998) , (Carbonell et al., 1999) . The corpus is divided into two parts: training set and test ~et. Each of the documents is flagged as to whether it discusses the target event, and these flags ('YES', 'NO') are the only information used tbr training the system to correctly classiC\" the target event. We applied the extracted topic to the tracking task under these conditions. 
The basic algorithm used in the \u2022 experiment is as follows: ", + "cite_spans": [ + { + "start": 138, + "end": 165, + "text": "(Allan and Carbonell, 1998)", + "ref_id": "BIBREF1" + }, + { + "start": 168, + "end": 192, + "text": "(Carbonell et al., 1999)", + "ref_id": "BIBREF7" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Tracking Task", + "sec_num": "5.3" + }, + { + "text": "Let $1, ---, S,,, be all the other training documents (where m is the number of training documents which does not belong to the target event) and Sx be a test document which should be classified as to whether or not it discusses the target event. $1, \"--, Sm and Sx are represented \" by term vectors as follows:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Represent other training and test documents as term vectors", + "sec_num": "2." + }, + { + "text": "~ = '\" { s.t. llj = f(t~j) ift 0 (1 < i iII", + "html": null, + "num": null + }, + "TABREF3": { + "text": "). %Ve used nouns in the documents. h t t p://morph.ldc.upenn.edu/TDT", + "type_str": "table", + "content": "
", + "html": null, + "num": null + }, + "TABREF5": { + "text": "The results of tracking task", + "type_str": "table", + "content": "
Nt     %Miss   %F/A    F1     %Rec   %Prec
1      32.5    0.16    0.68   67.5   70.0
2      23.7    0.06    0.80   76.3   87.8
4      23.1    0.05    0.81   76.9   90.1
8      12.0    0.08    0.87   88.0   91.4
16     13.7    0.06    0.89   86.3   93.6
Avg    21.0    0.08    0.76   79.0   86.6
In Table 2, 'Nt' denotes the number of positive training documents, where Nt takes on the values 1, 2, 4, 8 and 16.
3 http://www.nist.gov/speech/tdt98.htm
", + "html": null, + "num": null + }, + "TABREF7": { + "text": "", + "type_str": "table", + "content": "
Table 3: Topic and event words in 'Kobe Japan quake'

Topic word    Devpjt   Devdit   DispPt   DispDt
earthquake    53.5     50.0     12.3     10.3
Japan         69.8     50.0     13.3     9.8
Kobe          56.6     50.0     8.6      6.4
fire          57.0     46.4     2.3      1.5

Event word    Devpjt   Devdit   DispPt   DispDt
emergency     50.0     74.7     0.9      1.5
area          40.6     50.0     0.6      1.0
worker        50.0     66.1     0.4      1.0
rescue        43.3     50.0     2.3      3.4

[Figure 6: DET curve for a sample tracking run; x-axis: false alarm probability (in %)]
", + "html": null, + "num": null + }, + "TABREF9": { + "text": "", + "type_str": "table", + "content": "
", + "html": null, + "num": null + }, + "TABREF10": { + "text": "The results of event words extraction", + "type_str": "table", + "content": "
", + "html": null, + "num": null + }, + "TABREF11": { + "text": "The results of Key Paragraph Extraction", + "type_str": "table", + "content": "
              Accuracy
Docs    10%                   20%                   Total
        Para   Correct (%)    Para   Correct (%)    Para    Correct (%)
2       58     44 (75.8)      117    91 (77.7)      175     135 (77.1)
4       107    80 (74.7)      214    160 (74.7)     321     240 (74.7)
8       202    138 (68.3)     404    278 (68.8)     606     416 (68.6)
16      281    175 (62.3)     563    361 (64.1)     844     536 (63.5)
Total   648    437 (67.4)     1,298  890 (68.5)     1,946   1,327 (68.1)
", + "html": null, + "num": null + } + } + } +} \ No newline at end of file