{
"paper_id": "M92-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:02.155095Z"
},
"title": "CRL/NMSU and Brandeis : Description of the MucBruce System as Used for MUC-4",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Cowie",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Louise",
"middle": [],
"last": "Guthrie",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Scott",
"middle": [],
"last": "Waterma",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Through their involvement in the Tipster project the Computing Research Laboratory at New Mexic o State University and the Computer Science Department at Brandeis University are developing a method fo r identifying articles of interest and extracting and storing specific kinds of information from large volumes o f Japanese and English texts. We intend that the method be general and extensible. The techniques involve d are not explicitly tied to these two languages nor to a particular subject area. Development for Tipster ha s been going on since September, 1992. The system we have used for the MUC-4 tests has only implemented some of the features we pla n to include in our final Tipster system. It relies intensively on statistics and on context-free text markin g to generate templates. Some more detailed parsing has been added for a limited lexicon, but lack of fulle r coverage places an inherent limit on its performance. Most of the information produced in our MUC template s is arrived at by probing the text which surrounds `significant' words for the template type being generated , in order to find appropriately tagged fillers for the template fields .",
"pdf_parse": {
"paper_id": "M92-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Through their involvement in the Tipster project the Computing Research Laboratory at New Mexic o State University and the Computer Science Department at Brandeis University are developing a method fo r identifying articles of interest and extracting and storing specific kinds of information from large volumes o f Japanese and English texts. We intend that the method be general and extensible. The techniques involve d are not explicitly tied to these two languages nor to a particular subject area. Development for Tipster ha s been going on since September, 1992. The system we have used for the MUC-4 tests has only implemented some of the features we pla n to include in our final Tipster system. It relies intensively on statistics and on context-free text markin g to generate templates. Some more detailed parsing has been added for a limited lexicon, but lack of fulle r coverage places an inherent limit on its performance. Most of the information produced in our MUC template s is arrived at by probing the text which surrounds `significant' words for the template type being generated , in order to find appropriately tagged fillers for the template fields .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The overall system architecture is shown in Figure 1 . Three independent processes operate on an inpu t text . One, the Text Tagger, marks a variety of strings with semantic information . The other two, the Relevant Template Filter and the Relevant Paragraph Filter, perform word frequency analysis to determin e whether a text should be allowed to generate templates for particular incident types and which paragraph s are specifically related to each incident type . These predictions are used by the central process in th e system, the Template Constructor, which uses a variety of heuristics to extract template information fro m the tagged text . A skeleton template structure is then passed to the final process, the Template Formatter, which performs some consistency checking, creates cross references and attempts to expand any names foun d in the template to the longest form in which they occur in the text . Each of the above processes is described in more detail below .",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "OVERVIEW OF THE TEMPLATE FILLING PROCES S",
"sec_num": null
},
{
"text": "We have developed a procedure for detecting document types in any language . The system requires training texts for the types of documents to be classified and is developed on a sound statistical basis usin g probabilistic models of word occurrence [Guthrie and Walker 1991] . This may operate on letter grams o f appropriate size or on actual words of the language being targeted and develops optimal detection algorithm s from automatically generated \"word\" lists . The system depends on the availability of appropriate training texts . So far the method has been applied to English, discriminating between Tipster and MUC texts, an d to Japanese between Tipster texts and translations of ACM proceedings . In both cases the classification scheme developed was correct 99% of the time .",
"cite_spans": [
{
"start": 249,
"end": 274,
"text": "[Guthrie and Walker 1991]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevancy Filters",
"sec_num": null
},
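The word-list classification described above can be sketched briefly. This is a minimal illustration of deciding relevance from two word lists, not the authors' implementation; the word lists and the simple majority-count decision rule are hypothetical stand-ins for the probabilistic model cited.

```python
# Sketch of word-list document classification in the spirit of the filter
# described above. The word lists and decision rule are illustrative only.

def classify(text, relevant_words, nonrelevant_words):
    """Label a text 'relevant' when it contains more words from the
    relevant list than from the non-relevant list."""
    tokens = text.upper().split()
    rel = sum(1 for t in tokens if t in relevant_words)
    non = sum(1 for t in tokens if t in nonrelevant_words)
    return "relevant" if rel > non else "non-relevant"

# Hypothetical training-derived word lists.
REL = {"BOMB", "EXPLODED", "GUERRILLAS", "ATTACK"}
NON = {"ECONOMY", "TRADE", "ELECTION"}

print(classify("A BOMB EXPLODED IN SAN SALVADOR", REL, NON))       # relevant
print(classify("THE ELECTION AND TRADE TALKS CONTINUE", REL, NON)) # non-relevant
```

The real filters are trained, and weight words statistically rather than counting set membership, but the flow of a decision is the same.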
{
"text": "The method has now been extended to the identification of relevant paragraphs and relevant templat e types for the MUC documents . This is a more complex problem due to the non-homogeneous nature of th e texts and the difficulty of deriving training sets of text . Each process uses two sets of words, one whic h occurs with high probability in the texts of interest, and the other which occurs in the `non-interesting ' texts . Due to the complexity of separating relevant from non-relevant information for the MUC texts w e actually use three filters, two trained on sets of non-relevant and relevant paragraphs and one trained o n sets of relevant and non-relevant texts . The lists of relevant and non-relevant paragraphs were derived using the templates of the 1300 text test corpus . Any paragraph which contributed two or more string fills to a particular template was used as part of the relevant training set ; paragraphs contributing only one string fill were regarded as of dubious accuracy and were not placed in either set and all other paragraphs wer e considered as non-relevant . Word lists were derived automatically by finding those words in the relevan t training set which occurred within a threshold of most frequently occurring words in the relevant paragraphs and not in the non-relevant paragraphs, and vice versa to obtain a set of non-relevant words .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevancy Filters",
"sec_num": null
},
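The word-list derivation above can be sketched as follows. This is an assumption-laden miniature: the threshold, tokenization, and toy paragraphs are hypothetical, and the real procedure ranks words by frequency rather than applying a raw count cutoff.

```python
from collections import Counter

def derive_word_list(pos_paras, neg_paras, threshold=2):
    """Sketch of the derivation described above: keep words occurring at
    least `threshold` times across the positive training paragraphs and
    never in the negative ones. Swapping the arguments yields the
    non-relevant list ('vice versa')."""
    counts = Counter(w for p in pos_paras for w in p.upper().split())
    neg_vocab = {w for p in neg_paras for w in p.upper().split()}
    return {w for w, c in counts.items() if c >= threshold and w not in neg_vocab}

# Toy training paragraphs (illustrative, not from the MUC corpus).
relevant = ["A BOMB EXPLODED TODAY", "THE BOMB DAMAGED A VEHICLE"]
nonrelevant = ["TRADE TALKS CONTINUED TODAY IN A CALM CAPITAL"]
print(derive_word_list(relevant, nonrelevant))  # {'BOMB'}
```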
{
"text": "The relevant template marker consists of two processes, the first trained on a set of texts consistin g of paragraphs from the MUC corpus which produced two or more string fills against text consisting o f paragraphs which generated no string fills .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevancy Filters",
"sec_num": null
},
{
"text": "These allow us to determine, based on word counts taken at paragraph level, whether the whole tex t should be checked for specific template types . The second stage is activated if any single paragraph in the text is found to be `relevant' . This stage is trained on the set of texts which generated a particular templat e type against texts which produced no templates . There are separate relevant and non-relevant lists of word s used to determine each template type .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevancy Filters",
"sec_num": null
},
{
"text": "The result is a vector represented as a Prolog fact which determines whether the texts will be allowed t o generate templates of a particular type . Thus : ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevancy Filters",
"sec_num": null
},
{
"text": "The relevant paragraph filter is the final stage and uses word lists which were derived from relevant an d non-relevant paragraphs for each template type .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "slot(4, ['NO', 'ARSON', 'NO', 'ATTACK', 'YES', 'BOMBING' , 'NO', 'KIDNAPPING', 'NO', 'ROBBERY', 'NO', 'DUMMY']) .",
"sec_num": null
},
{
"text": "Once again this operates at the paragraph level and produces a list of paragraph numbers for eac h template type . These paragraph lists are only used if the relevant template filter has also predicted a template of that type . This stage produces a vector of relevant paragraphs . Thus : The two stages can be thought of as first distinguishing relevant texts for a particular template typ e from among all texts and second, given a relevant text, to distinguish between the relevant and non-relevan t paragraphs within that text for the template type .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "slot(4, ['NO', 'ARSON', 'NO', 'ATTACK', 'YES', 'BOMBING' , 'NO', 'KIDNAPPING', 'NO', 'ROBBERY', 'NO', 'DUMMY']) .",
"sec_num": null
},
{
"text": "Partial word lists for relevant and non-relevant texts are given in Tables 1 and 2 . The full lists contain 124 and 117 words respectively . Partial relevant word lists for BOMBING at the text level (relevant template ) and the paragraph level are given in Tables 3 and 4 . The full lists contain 176 and 51 words respectively .",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 82,
"text": "Tables 1 and 2",
"ref_id": "TABREF2"
},
{
"start": 257,
"end": 271,
"text": "Tables 3 and 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "slot(4, ['NO', 'ARSON', 'NO', 'ATTACK', 'YES', 'BOMBING' , 'NO', 'KIDNAPPING', 'NO', 'ROBBERY', 'NO', 'DUMMY']) .",
"sec_num": null
},
{
"text": "A key question for the Tipster and MUC tasks is the correct identification of place names, company an d organization names, and the names of individuals . We now have available to us several sources of geographic , company and personal name information . In addition the templates provided for MUC also supplied nam e information . These have been incorporated in a set of tagging files which provide lexical information as a pre-processing stage for every text .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Tagging",
"sec_num": null
},
{
"text": "The details of the Text Tagger are shown in Figure 2 , which is a screen dump of an interface which allow s examination of the operation of each stage in the filter . The text window on the left shows the state of a text after the group dates process has converted dates to standard form and on the right after the temporary tags placed to identify date constituents have been removed . Each stage, apart from the last, marks the text with tags in the form :",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Semantic Tagging",
"sec_num": null
},
{
"text": "Thus for example a date takes the form : In general each stage in the pipeline is only allowed to modify text which is not already marked, althoug h an examination of already marked text is allowed . Several stages also place temporary markers in the text For processing by the template constructor the final convert facts stage changes each sentence into a Prolog fact, containing sentence and paragraph numbers and a list of structures holding the marked item s Thus : . sen (3,3,[name(\"GARCIA ALVARADO\",null),',', num(\"86\",num(86)),',' , cs(\"WAS\",closed(was,[pastv] )), gls(\"KILLED\",action(killed,'ATTACK')) , cs (\"WHEN\",closed(when,[conj,pron] )), cs (\"A\",closed(a,[determiner] All the programs in the Tagger are written in `C' or Lex . We describe three of these components in mor e detail .",
"cite_spans": [],
"ref_spans": [
{
"start": 477,
"end": 570,
"text": "(3,3,[name(\"GARCIA ALVARADO\",null),',', num(\"86\",num(86)),',' , cs(\"WAS\",closed(was,[pastv]",
"ref_id": null
},
{
"start": 618,
"end": 649,
"text": "(\"WHEN\",closed(when,[conj,pron]",
"ref_id": null
},
{
"start": 657,
"end": 683,
"text": "(\"A\",closed(a,[determiner]",
"ref_id": null
}
],
"eq_spans": [],
"section": "<\\TYPE> ACTUAL TEXT STRING {SEMANTIC INFORMATION} <\\ENDTYPE>",
"sec_num": null
},
{
"text": "This program uses a large list of known strings which is held alphabetically . For each word in the text a binary search is performed on the list . When a match is found it will be with the longest string beginnin g with the word, subsequent words in the text are compared with the matched string . If the complete string i s matched then this portion of text is marked with the information associated with the string . If a complet e match is not achieved the word is checked against the previous item in the list, which may also match the word, and the process is repeated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Known Item s",
"sec_num": null
},
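The longest-match lookup just described can be sketched as follows. This is one plausible reading of the search, not the authors' C code: the table entries are hypothetical, and the fallback to the single previous entry mirrors the description above rather than a fully general prefix search.

```python
import bisect

# Hypothetical known-item table: (string, tag) pairs, sorted by string.
KNOWN = sorted([
    ("BOMB", "weapon"),
    ("CAR BOMB", "weapon"),
    ("SAN SALVADOR", "place"),
    ("SAN SALVADOR AIRPORT", "place"),
])
KEYS = [k for k, _ in KNOWN]

def match_at(tokens, i):
    """Return (tag, n_tokens) for the longest known string starting at
    tokens[i], or None. Binary-search on the first word, try the last
    candidate string beginning with it, then fall back to the previous
    entry in the sorted list, as described above."""
    word = tokens[i]
    # Rightmost entry that could begin with `word`.
    j = bisect.bisect_right(KEYS, word + "\uffff") - 1
    for cand in (j, j - 1):
        if cand < 0:
            continue
        key, tag = KNOWN[cand]
        parts = key.split()
        if tokens[i:i + len(parts)] == parts:
            return tag, len(parts)
    return None

print(match_at(["SAN", "SALVADOR", "AIRPORT"], 0))  # ('place', 3)
print(match_at(["SAN", "SALVADOR", ","], 0))        # ('place', 2)
```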
{
"text": "The strings and information in the file are derived from a variety of sources . The place name informatio n provided for MUC, organization, target and weapon names derived from the MUC templates and furthe r lists of human occupations and titles derived from Longman's .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Known Item s",
"sec_num": null
},
{
"text": "The proper name filter uses a variety of methods to successfully identify a large majority of the huma n names found in a MUC text . It uses two data resources ; a complete word list of all the Longman Dictionar y headwords and a list of English and Spanish first and last names . In addition it uses the hidden Marko v Model algorithm described by BBN in MUC-3 to identify Spanish words . The first stage marks words no t in Longman's, Spanish words and known first and last names . The second stage decides whether a group of these items is indeed a name . Any group containing a Spanish word or a known name is recognized , unknown words on their own must be preceded by a title of some kind (identified by the Known Items step) . Once an unknown item is identified as a name, however, it is added temporarily to the list of first and las t names, so if it occurs in isolation later in the text it will be recognized correctly . A further complication to the problem of name recognition was found in several names which contained text which had already bee n identified as a place name . In this case the proper name marker over-rides the previous marking and marks the entire section of text as a human name .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proper Names",
"sec_num": null
},
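The second-stage grouping decision can be sketched as a small predicate. The tags and the shape of the input are hypothetical; the real filter also maintains the temporary name list described above, which this sketch omits.

```python
def accept_name_group(items, preceded_by_title=False):
    """Decide whether a group of candidate tokens is a human name, in the
    spirit of the second stage above: any group containing a Spanish word
    or a known first/last name is accepted; a lone unknown word is
    accepted only when preceded by a title. Tags are illustrative."""
    tags = {tag for _, tag in items}
    if "spanish" in tags or "known_name" in tags:
        return True
    return len(items) == 1 and preceded_by_title

print(accept_name_group([("GARCIA", "known_name"), ("ALVARADO", "spanish")]))  # True
print(accept_name_group([("ZORNOZA", "unknown")]))                             # False
```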
{
"text": "The date marker uses a wide variety of patterns which have been identified in the MUC and Tipster texts a s referring to time . Each date is converted to a standard form and the identified text marked . Relative time expressions are always converted with reference to the headline date on the text . This assumption appears to be valid in the vast majority of cases we have examined .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Date Part s",
"sec_num": null
},
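Resolving a relative expression against the headline date can be sketched for one pattern. This covers only the "N DAYS AGO" form from the tagged example shown in Figure 2; the actual marker handles a much wider variety of patterns.

```python
import re
from datetime import date, timedelta

def resolve_relative(expr, headline):
    """Convert a relative time expression like '5 DAYS AGO' to the
    standard form used in the tagged text, anchored on the headline
    date. A sketch covering a single pattern, not the full set."""
    m = re.fullmatch(r"(\d+) DAYS? AGO", expr.upper())
    if m:
        d = headline - timedelta(days=int(m.group(1)))
        return d.strftime("%d %b %y").upper()
    return expr  # unrecognized patterns pass through unchanged

print(resolve_relative("5 DAYS AGO", date(1989, 4, 19)))  # 14 APR 89
```

With a headline date of 19 April 1989 this reproduces the standard form in the Figure 2 example, date("14 APR 89",890414).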
{
"text": "The template constructor uses the tagged text and the list of relevant paragraphs for each template typ e to generate skeleton templates which are produced as a list of triples, SLOT NUMBER, SET FILL, STRIN G FILL . For example :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
{
"text": "[ [0 , 'TST2-MUC4-0048 ' ,null] ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
{
"text": "A sequence of paragraphs is assumed to generate a new template . The sentences in these paragraphs are examined for a sentence containing a key verb for the template type . Sentences before this sentence ar e held in reverse order and sentences after in normal order . Each sentence is stripped of any prefatory claus e terminated by \"that\" (e .g . GOVERNMENT OFFICIALS REPORTED TODAY THAT) . The remainder of the sentence is reordered into lists containing texts marked with specific semantic types . These correspond to the appropriate fillers for the main sections of the template . The sentence is then marked as active or passive . A search is then made in the current sentence and either the previous or the succeeding ones fo r items satisfying the appropriate conditions to fill a template slot . Thus for an active sentence the perpetrator will be sought in the head of the sentence and then, if not found, in previous sentences . This provides a crude form of reference resolution as pronouns are not marked with any specific semantic information . The target is checked for in the tail of the sentence and then in subsequent sentences . This process is repeated for all the main fields of the template . It relies heavily on the fact that our text locating techniques are accurate . If no appropriate action word is found the template creation process is abandoned . The process is also abandoned if some of the template filling criteria are not satisfied (eg if the human target is a militar y officer) . The template construction program is written in Prolog and was compiled to run stand-alone usin g Quintus Prolog .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
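The directional slot search above can be sketched in a few lines. The data layout (sentences as lists of tagged items) and the tag names are hypothetical simplifications of the tagged Prolog facts the constructor actually consumes.

```python
def fill_slot(sentences, key_idx, tag, direction):
    """Look for an item with semantic tag `tag` in the key sentence, then
    in preceding sentences (direction=-1, e.g. the perpetrator of an
    active sentence) or succeeding ones (direction=+1, e.g. the target).
    Each sentence is a list of (string, tag) pairs. Sketch only."""
    stop = -1 if direction < 0 else len(sentences)
    order = [key_idx] + list(range(key_idx + direction, stop, direction))
    for i in order:
        for text, t in sentences[i]:
            if t == tag:
                return text
    return None  # slot left unfilled; may abandon the template

sents = [[("GUERRILLAS", "organ")], [("BOMB", "weapon")], [("VEHICLE", "target")]]
print(fill_slot(sents, 1, "organ", -1))   # GUERRILLAS
print(fill_slot(sents, 1, "target", +1))  # VEHICLE
```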
{
"text": "We obviously need to add more precise syntax and semantics at the sentence level and to provide a structure which allows the inter-relationship of a group of sentences to be captured . The advantage of the method we are using at the moment is that it is robust and can be used as a fall-back whenever the mor e precise methods fail . A limited amount of semantic parsing was implemented before the final MUC-4 test . This over-rode the robust method whenever an appropriate parse was found . Due to the limited number of lexical entries we were able to generate before the test, it was not possible to accurately assess the impac t of the more precise grammar .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
{
"text": "Below are given sample entries of the lexical structures used in the MUC-4 tests. The transitive ver b murder and gerundive nominal killing illustrate the current state of the integration of lexical semanti c information (seen in the qualia field) with corpus-related information derived from tuning (seen in th e cospec field) [Pustejovsky 1991] . Cospecifacaiion is a semantic tagging of what collocational patterns th e lexical item may enter into . The sem field specifies directly how to map the qualia values into the appropriat e slots in the MUC templates . Parsing rules which allow indeterminate gaps are used to match the cospecification against the ke y sentences found . A parser-generator uses the cospec fields of the GLS's to construct the parsing rules, wit h type constraints obtained from the corresponding qualia fields . Certain operators within the rules (such as np() and \"*\") allow varying degrees of unspecified material to be considered in the constituents of the parse . The parsing rules can in this way be seen as specifying complex regular expressions . Because of thi s looseness, the parser will not break due to unknown items or intervening material .",
"cite_spans": [
{
"start": 328,
"end": 346,
"text": "[Pustejovsky 1991]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
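The idea that gap-tolerant parsing rules amount to complex regular expressions can be made concrete. This is a deliberately tiny sketch: the pattern syntax (literal tokens plus "*" for an unspecified gap) is a hypothetical stand-in for the cospec operators, and the real rules are compiled to Prolog, not Python regexes.

```python
import re

def compile_cospec(pattern):
    """Turn a cospec-like pattern into a regex: literal tokens must occur
    in order, and '*' tolerates unspecified intervening material, so the
    matcher does not break on unknown words. Illustrative only."""
    pieces = []
    for tok in pattern.split():
        if tok == "*":
            pieces.append(r".*?")          # unspecified gap
        else:
            if pieces and pieces[-1] != r".*?":
                pieces.append(r"\s+")      # adjacent tokens: whitespace only
            pieces.append(re.escape(tok))
    return re.compile("".join(pieces))

rule = compile_cospec("BOMB * EXPLODED")
print(bool(rule.search("A BOMB PLACED BY URBAN GUERRILLAS EXPLODED")))  # True
print(bool(rule.search("THE BOMB WAS DEFUSED")))                        # False
```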
{
"text": "These parsing rules are individually pre-compiled into compact Prolog code (each a small expressio n matching machine) before being included into the template constructor . The term-unification machinery of Prolog automatically relates the syntactic constituents of the parse with the type constraints from th e qualia and also with the arguments of the template semantics, avoiding the need for complex type matchin g and argument matching procedures .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
{
"text": "Performance is degraded by the current partial implementation of the cospec field in the lexical structure definition . The statistical-based corpus-tuning program for the lexical structures was not included for th e MUC-4 test runs, but is on development-schedule for inclusion in the Tipster test run later this summer . The cospec for a lexical item ideally encodes corpus-based usage information for each semantic aspect of the word (e .g . its qualia, event type, and argument structure) . This is a statistically-encoded structure o f all admissible semantic collocations associated with the lexical item .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
{
"text": "The initial seeding of the LS's is being done from lexical entries in the Longman Dictionary of Contemporary English [Proctor et al 1978] , largely using tools described in [Wilks et al 1990] . These are the n automatically adapted to the format of generative lexical structures . It is these lexical structures which ar e then statistically tuned against the corpus, following the methods outlined in [Pustejovsky 1992 ] and [Anic k and Pustejovsky 1990] . Semantic features for a lexical item which are missing or only partially specifie d from dictionary seeding are, where possible, induced from a semantic model of the corpus . ",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "[Proctor et al 1978]",
"ref_id": "BIBREF3"
},
{
"start": 173,
"end": 191,
"text": "[Wilks et al 1990]",
"ref_id": "BIBREF6"
},
{
"start": 402,
"end": 419,
"text": "[Pustejovsky 1992",
"ref_id": "BIBREF5"
},
{
"start": 434,
"end": 455,
"text": "and Pustejovsky 1990]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Construction",
"sec_num": null
},
{
"text": "This final stage is also a Prolog program . This takes as input the lists of triples produced by the previou s stage and a list of every name found in the text . It then produces the final template, introducing cros s references between serially defined fields which are related to each other . The name list is used to attemp t to choose the fullest version of a name found in the text and substitute this for any shorter versions foun d in the template outline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Formattin g",
"sec_num": null
},
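The name-expansion step above can be sketched with a subsequence check. The heuristic shown (longest name in the text containing the short name's tokens in order) is one plausible reading of "fullest version", and the example names echo the TST2-MUC4-0048 walkthrough below.

```python
def _in_order(tokens, sub):
    """True when `sub` appears as a subsequence of `tokens`."""
    it = iter(tokens)
    return all(t in it for t in sub)

def expand_name(short, names_in_text):
    """Substitute the fullest version of a name found in the text for a
    shorter one, as the formatter above does: pick the longest name that
    contains the short name's tokens in order. Illustrative sketch."""
    candidates = [n for n in names_in_text if _in_order(n.split(), short.split())]
    return max(candidates, key=len, default=short)

names = ["ROBERTO GARCIA ALVARADO", "GARCIA ALVARADO", "MERINO"]
print(expand_name("GARCIA ALVARADO", names))  # ROBERTO GARCIA ALVARADO
```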
{
"text": "MucBruce generates four templates for this text . All are related to the vehicle bomb described at th e beginning of the text . The template and relevant paragraphs filters produce the following predictions : 4, ['NO', 'ARSON', 'NO', 'ATTACK', 'YES', 'BOMBING', 'NO' , 'KIDNAPPING', 'NO', 'ROBBERY', 'NO', 'DUMMY'] ) . rel_paras ([[1,3,5,6,13,18,19,20],'ARSON' , [1,2,3,4,5,6,7,8,9,10,11,12,13,14,16,17,18,19,20,21],'ATTACK' , [1,3,4,5,6,7,8,9,10,11,13,14,16,17,18,19,20],'BOMBING' , [1,3,6,7,16,17,20],'KIDNAPPING', [19,20],'ROBBERY', [],'DUMMY'] ) .",
"cite_spans": [
{
"start": 209,
"end": 211,
"text": "4,",
"ref_id": null
},
{
"start": 212,
"end": 218,
"text": "['NO',",
"ref_id": null
},
{
"start": 219,
"end": 227,
"text": "'ARSON',",
"ref_id": null
},
{
"start": 228,
"end": 233,
"text": "'NO',",
"ref_id": null
},
{
"start": 234,
"end": 243,
"text": "'ATTACK',",
"ref_id": null
},
{
"start": 244,
"end": 250,
"text": "'YES',",
"ref_id": null
},
{
"start": 251,
"end": 261,
"text": "'BOMBING',",
"ref_id": null
},
{
"start": 262,
"end": 268,
"text": "'NO' ,",
"ref_id": null
},
{
"start": 269,
"end": 282,
"text": "'KIDNAPPING',",
"ref_id": null
},
{
"start": 283,
"end": 288,
"text": "'NO',",
"ref_id": null
},
{
"start": 289,
"end": 299,
"text": "'ROBBERY',",
"ref_id": null
},
{
"start": 300,
"end": 305,
"text": "'NO',",
"ref_id": null
},
{
"start": 306,
"end": 314,
"text": "'DUMMY']",
"ref_id": null
}
],
"ref_spans": [
{
"start": 329,
"end": 552,
"text": "([[1,3,5,6,13,18,19,20],'ARSON' , [1,2,3,4,5,6,7,8,9,10,11,12,13,14,16,17,18,19,20,21],'ATTACK' , [1,3,4,5,6,7,8,9,10,11,13,14,16,17,18,19,20],'BOMBING' , [1,3,6,7,16,17,20],'KIDNAPPING', [19,20],'ROBBERY', [],'DUMMY']",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "TST2-MUC4-004 8",
"sec_num": null
},
{
"text": "This means that only 4 BOMBING templates will be produced . The first of these produces a reasonably complete match to the key ; details on the driver and bodyguards are omitted . The remaining three template s are incorrect, carrying only the information that a bombing has taken place. The attack on the home i s not identified by our naive method of multiple template generation, as it already occurs in a sequence o f paragraphs in which only the first event is found .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST2-MUC4-004 8",
"sec_num": null
},
{
"text": "We feel that our present system, given its only partially completed state, shows potential . In particular th e following techniques seem generally useful :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "\u2022 The recognition of text types and sub-texts within a text using statistical techniques trained on larg e numbers of sample texts .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "\u2022 The use of the key templates to derive system lexicons .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "\u2022 The automatic seeding of lexical structures from machine readable dictionaries .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "\u2022 The use of lexically-driven cospecification to provide a robust parsing method at the sentence level .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "\u2022 The successful combination of a variety of techniques in the human name recognizer .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "\u2022 The production of a number of independent tools for tagging texts .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "The system is robust and provides a good starting point for the application of more sophisticated techniques . Given appropriate data it should be possible to produce a similar system for a different domain in a matter of weeks . The tagger software is already being adapted to Japanese and we have already establishe d that we can achieve similar performance with the statistical methods for Japanese texts using characte r bigrams .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
}
],
"back_matter": [
{
"text": "The system described here has been created using work funded by DARPA under contract number MDA904-91-C-9328 . The following colleagues at CRL and Brandeis have contributed time, ideas, programming ability and enthusiasm to the development of the MucBruce system ; Federica Busa, Peter Dilworth, Ted Dunning , Eric Eiverson, Steve Helmreich, Wang Jin, Fang Lin, Bill Ogden, Gees Stein, and Takahiro Waka o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGEMENTS",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "THE FARABUNDO MARTI NATIONAL LIBERATION FRONT",
"authors": [
{
"first": "'el Salvador : San",
"middle": [],
"last": "Salvador",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "1, '6' ,null] , [4, 'ATTACK ',null] , [2,'19 APR 89',null] , [3,'EL SALVADOR : SAN SALVADOR (CITY)',null] , [6, 'null' ,\"BOMB\"] , [7, 'BOMB' ,null] , [18, 'null' , \"ROBERTO GARCIA ALVARADO\"] , [8, 'TERRORIST ACT' ,null] , [9, 'null' ,\"TERRORIST\"] , [10, 'null ' , \"THE FARABUNDO MARTI NATIONAL LIBERATION FRONT\"] , [12, 'null' ,\"VEHICLE\"] , [13, 'TRANSPORT VEHICLE' ,null] , [19, 'null' ,\"GENERAL\"] , [20, 'MILITARY' ,null] , [21, 'null' ,null] , [5, 'ACCOMPLISHED' ,null] , [16,'-',null] , [23, 'DEATH' ,null] ]",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Application of Lexical Semantics to Knowledge Acquisitio n from Corpora",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anick",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of Coling 90",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anick, Peter and Pustejovsky, J . (1990) . An Application of Lexical Semantics to Knowledge Acquisitio n from Corpora. Proceedings of Coling 90, Helsinki, Finland .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Some Comments on Document Classification by Machine",
"authors": [
{
"first": "Louise",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Elbert",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 1991,
"venue": "Computer and Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guthrie, Louise and Elbert Walker (1991) . Some Comments on Document Classification by Machine . Mem- orandum in Computer and Cognitive Science, MCCS-92-935, Computing Research Laboratory, New Mexic o State University, New Mexico .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Longman Dictionary of Contemporary English",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Proctor",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"F"
],
"last": "Ilson",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Ayto",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proctor, Paul, Robert F . Ilson, John Ayto, et al . (1978) . Longman Dictionary of Contemporary English , Longman Group Limited : Harlow, Essex, England .",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pustejovsky, James (1991) \"The Generative Lexicon,\" Computational Linguistics, 17 .4, 1991 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Acquisition of Lexical Semantic Knowledge from Large Corpora",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the DARPA Spoken and Written Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pustejovsky, James (1992) \"The Acquisition of Lexical Semantic Knowledge from Large Corpora \" , in Pro- ceedings of the DARPA Spoken and Written Language Workshop, Arden House, New York, February, 1992 , Morgan Kaufmann .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Providing Machin e Tractable Dictionary Tools",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Fass",
"suffix": ""
},
{
"first": "C-M",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Plate",
"suffix": ""
},
{
"first": "B",
"middle": [
"M"
],
"last": "Slator",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilks, Y ., Fass, D ., C-M ., Guo, McDonald, J . E ., Plate, T . and Slator, B .M . 1990 . \"Providing Machin e Tractable Dictionary Tools,\" in Machine Translation, 5 .1, 1990 .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "MucBruce -System Overvie w"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "<\\date> 5 DAYS AGO {date(\"14 APR 89\",890414)} <\\enddate >"
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "MucBruce -Tagging Pipeline to allow subsequent grouping by following stages . These temporary markers are removed by the filter stages . Each text is marked as follows : Known Items Places, organizations, physical targets, human occupations, weapons . Proper Names Human proper names . Dates All standard date forms and other references to time. Closed class Prepositions, determiners and conjunctions . Residue All other words are marked as unknown . The final tagged text looks like this : <\\name> GARCIA ALVARADO <\\endname>, <\\num> 56 {num(56)} <\\endnum> , <\\cs> WAS {closed(was,[pastv])} <\\endcs> <\\gls> KILLED {action(killed,'ATTACK')} <\\endgls> <\\cs> WHEN {closed(when,[conj,pron])} <\\endcs> <\\cs> A {closed(a,[determiner]) } <\\endcs> <\\weapon> BOMB {type(['BOMB'])} <\\endweapon> <\\res> PLACE D {atom(placed)} <\\endres > <\\cs> BY {closed(by,[prep])} <\\endcs> <\\res> URBAN {atom(urban)} <\\endres > <\\organ> GUERRILLAS {type(['TERRORIST', 'NOUN' ])} <\\endorgan> <\\cs > ON {closed(on,[prep])} <\\endcs> <\\cs> HI S {closed(his,[determiner,pron])} <\\endcs> <\\target> VEHICL E {type(['TRANSPORT VEHICLE'])} <\\endtarget> <\\gls> EXPLODE D {action(exploded,'BOMBING')} <\\endgls> <\\cs> A S {closed(as,[conj,pron,prep])} <\\endcs> <\\cs> IT {closed(it,[pron]) } <\\endcs> <\\res> CAME {atom(came)} <\\endres > <\\cs> TO {closed(to,[prep])} <\\endcs> <\\cs> A {closed(a,[determiner]) } <\\endcs> <\\res> HALT {atom(halt)} <\\endres > <\\cs> AT {closed(at,[prep])} <\\endcs> <\\cs> AN {closed(an,[determiner])} <\\endcs > <\\res> INTERSECTION {atom(intersection)} <\\endres> <\\cs> IN {closed(in,[prep]) } <\\endcs> <\\res> DOWNTOWN {atom(downtown)} <\\endres> <\\place> SAN SALVADO R {type([['CITY','EL SALVADOR'],['DEPARTMENT','EL SALVADOR']])} <\\endplace> ."
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "\",closed(his,[determiner,pron])), target(\"VEHICLE\",type(['TRANSPORT VEHICLE'])) , gis(\"EXPLODED\",action(exploded,'BOMBING')), cs(\"AS\",closed(as,[conj,pron,prep])) , cs(\"IT\",closed(it,[pron])), res(\"CAME\",atom(came)) , cs(\"TO\",closed(to,[prep])), cs(\"A\",closed(a,[determiner])) , res(\"HALT\",atom(halt)), cs(\"AT\",closed(at,[prep])) , cs(\"AN\",closed(an,[determiner])), res(\"INTERSECTION\",atom(intersection)) , cs(\"IN\",closed(in,[prep])), res(\"DOWNTOWN\",atom(downtown)) , place(\"SAN SALVADOR\",type([['CITY','EL SALVADOR'],['DEPARTMENT','EL SALVADOR']])),' .']) ."
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "self , \"*\" , \"WITH\" , np(I1)])]) , sem([type ('AMOK '),perp(H1),hum_tgt(H2),last (I1),hum_tgt_eff('DEATH')]"
},
"FIGREF6": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "TGT : EFFECT OF INCIDENT DEATH : \"ROBERTO GARCIA ALVARADO \" 24 . HUM TGT : TOTAL NUMBER * _Table 5 : One of Four Templates Generated for TST2-MUC4-004 8"
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "A FLAG FROM THE <\\organ> MANUEL RODRIGUEZ' PATRIOTIC FRONT Itvpef[TERRORIST '; ' NAME' DI <\\endorgan> (<\\organ> FPMR Itypef[TERRORIST ', ' NAME 111 <\\endorgan> ) WAS FOUN D AT THE SCENE OF THE EXPLOSION. THE<\\organ> FPMR Itypet(TERRORIST, 'NAME' DI <\\endorgan> IS A CLANDESTINE LEFTIS T <\\organ GROUP ltypei('OTHER NOUN Di <\\endorgan> THAT PROMOTES \"ALL FORMS O F STRUGGLE\"AGAINST THE <\\organ> MILITAR Y",
"content": "<table><tr><td>Input file: TST2-:MLC.-0002</td><td>Start</td><td>Overview : : Qui t</td></tr><tr><td/><td>Tagger</td><td/></tr><tr><td>i Know m</td><td/><td/></tr><tr><td>+ I Itemsz</td><td/><td/></tr><tr><td>Inpu t</td><td/><td/></tr><tr><td>Itype((' MILITARY: 'NOUN'. Dl &lt;\\endorgan &lt;\\organ&gt; GOVERNMENT Itype(CGOVERNMENT' , 'NOUN ' &lt;\\human&gt; POLICE Itype(('LAW ENFORCEM E NT' REPORTED THAT THE EXPLOSION CAUSED SERIOUS .'NOUN'DI &lt;\\endhuman&gt; SOURCES HAV E</td><td colspan=\"2\">A FLAG FROM THE &lt;\\organ&gt; MANUEL RODRIGUEZ PATRIOTIC FRONT Itypel[TERRORIST , ' NAME' DI &lt;\\endorgan&gt; (&lt;\\organ&gt; FP'I R \u00a3typet(TERRORIST. .'NAME' DI &lt;\\endorgan&gt; ) WAS FOUN D AT THE SCENE OF THE EXPLOSION. THE &lt;\\organ &gt; FPMR type((TERRORIST '.'NAME DI&lt;\\endorgan &gt; Properl IS A CLANDESTINE LEFTIST Names &lt;\\organ&gt; GROUP Itypeb1OTHEK, iNOUN`DI &lt;\\endorgan&gt; THAT PROMOTES 'ALL FORMS O F STRUGGLE\" AGAINST THE &lt;\\organ&gt;MILITARY \u00a3type(CM ILI TARY' . 'NOUN' DI &lt;\\endorgan&gt; &lt;\\organ&gt; GOVERNMENT ItypelrGOVERNMENT , 'NOUN' DI &lt;\\endorgan&gt; esuHEADED= BY &lt;\\human GENERAL Itypef[' MILITARY', 'NOUN; ' RANK 'DI &lt;\\endhuman&gt; -.nAUOUSTO= =suPINOCHET ..</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"2\">: Part of Non-Relevant Text Word Lis t</td></tr><tr><td>FREQUENCY</td><td>WORD</td></tr><tr><td/><td>BOM B</td></tr><tr><td/><td>EXPLOSIO N</td></tr><tr><td/><td>INJURE D</td></tr><tr><td/><td>EXPLODED</td></tr><tr><td/><td>DYNAMIT E</td></tr><tr><td/><td>CA R</td></tr><tr><td/><td>BOMBS</td></tr><tr><td/><td>STREET</td></tr><tr><td/><td>PLACE D</td></tr><tr><td/><td>DAMAGED</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>: Part of Relevant Template Word List : BOMBIN G</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "....... . ...... . ....... ................... .. ........ ....... ......",
"content": "<table><tr><td>\" MUCBr uca' [8L-NMSU/Brandei s</td><td/></tr><tr><td>Releven t</td><td/></tr><tr><td>Templates</td><td/></tr><tr><td/><td>Template</td></tr><tr><td>Tagge r</td><td>Formate r</td></tr><tr><td>(click to view)</td><td/></tr></table>",
"html": null
}
}
}
}