{
"paper_id": "A00-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:12:39.382616Z"
},
"title": "REES: A Large-Scale Relation and Event Extraction System",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": "",
"affiliation": {},
"email": "aonec@verdi.sra.com"
},
{
"first": "Mila",
"middle": [],
"last": "Ramos-Santacruz",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports on a large-scale, end-to-end relation and event extraction system. At present, the system extracts a total of 100 types of relations and events, which represents a much wider coverage than is typical of extraction systems. The system consists of three specialized pattern-based tagging modules, a high-precision coreference resolution module, and a configurable template generation module. We report quantitative evaluation results, analyze the results in detail, and discuss future directions.",
"pdf_parse": {
"paper_id": "A00-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports on a large-scale, end-to-end relation and event extraction system. At present, the system extracts a total of 100 types of relations and events, which represents a much wider coverage than is typical of extraction systems. The system consists of three specialized pattern-based tagging modules, a high-precision coreference resolution module, and a configurable template generation module. We report quantitative evaluation results, analyze the results in detail, and discuss future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One major goal of information extraction (IE) technology is to help users quickly identify a variety of relations and events and their key players in a large volume of documents. In contrast with this goal, state-of-the-art information extraction systems, as shown in the various Message Understanding Conferences (MUCs), extract a small number of relations and events. For instance, the most recent MUC, MUC-7, called for the extraction of 3 relations (person-employer, maker-product, and organization-location) and 1 event (spacecraft launches). Our goal is to develop an IE system which scales up to extract as many types of relations and events as possible with a minimum amount of porting effort combined with high accuracy. Currently, REES handles 100 types of relations and events, and it does so in a modular, configurable, and scalable manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Below, Section 1 presents the ontologies of relations and events that we have developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Section 2 describes REES' system architecture. Section 3 evaluates the system's performance, and offers a qualitative analysis of system errors. Section 4 discusses future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "As the first step in building a large-scale relation and event extraction system, we developed ontologies of the relations and events to be extracted. These ontologies represent a wide variety of domains: political, financial, business, military, and life-related events and relations. \"Relations\" covers what in MUC-7 are called Template Elements (TEs) and Template Relations (TRs). There are 39 types of relations. While MUC TEs dealt only with singular entities, REES extracts both singular and plural entities (e.g., \"five executives\"). The TR relations are shown in italic in the table below. Table 1: Relation Ontology \"Events\" are extracted along with their event participants, e.g., \"who did what to whom when and where?\" For example, for a BUYING event, REES extracts the buyer, the artifact, the seller, and the time and location of the BUYING event.",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 606,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation and Event Ontologies",
"sec_num": "1"
},
{
"text": "REES currently covers 61 types of events, as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Person-OtherRelative Person-BirthPlace Person-BirthDate",
"sec_num": null
},
{
"text": "REES consists of three main components: a tagging component (cf. Section 2.1), a co-reference resolution module (cf. Section 2.2), and a template generation module (cf. Section 2.3). Figure 3 also illustrates that the user may run REES from a Graphical User Interface (GUI) called TemplateTool (cf. Section 2.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 231,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Events",
"sec_num": null
},
{
"text": "The tagging component consists of three modules as shown in Figure 3 : NameTagger, NPTagger and EventTagger. Each module relies on the same pattern-based extraction engine, but uses different sets of patterns. The NameTagger recognizes names of people, organizations, places, and artifacts (currently only vehicles).",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tagging Modules",
"sec_num": "2.1"
},
{
"text": "The NPTagger then takes the XML-tagged output of the NameTagger through two phases. First, it recognizes non-recursive Base Noun Phrase (BNP) (our specifications for BNP resemble those in Ramshaw and Marcus 1995) . Second, it recognizes complex NPs for only the four main semantic types of NPs, i.e., Person, Organization, Location, and Artifact (vehicle, drug and weapon). It makes postmodifier attachment decisions only for those NPs that are crucial to the extraction at hand. During this second phase, relations which can be recognized locally (e.g., Age, Affiliation, Maker) are also recognized and stored using the XML attributes for the NPs. For instance, the XML tag for \"President of XYZ Corp.\"",
"cite_spans": [
{
"start": 188,
"end": 212,
"text": "Ramshaw and Marcus 1995)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3: The REES System Architecture",
"sec_num": null
},
{
"text": "below holds an AFFILIATION attribute with the ID for \"XYZ Corp.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3: The REES System Architecture",
"sec_num": null
},
{
"text": "<PNP ID=\"03\" AFFILIATION=\"04\">President of <ENTITY ID=\"04\">XYZ Corp.</ENTITY></PNP>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3: The REES System Architecture",
"sec_num": null
},
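The ID-linking scheme described above can be sketched as follows. This is an illustrative reading of the XML fragment in the text, not REES's actual code; the variable names are assumptions.

```python
import xml.etree.ElementTree as ET

# The NPTagger stores locally recognized relations (e.g., Affiliation)
# as XML attributes holding the ID of the related entity. Parsing the
# fragment from the text recovers the PERSON -> ORG link by ID lookup.
fragment = (
    '<PNP ID="03" AFFILIATION="04">President of '
    '<ENTITY ID="04">XYZ Corp.</ENTITY></PNP>'
)

pnp = ET.fromstring(fragment)
affiliation_id = pnp.get("AFFILIATION")

# Index every tagged element by its ID attribute, then resolve the link.
ids = {el.get("ID"): el for el in pnp.iter() if el.get("ID")}
org = ids[affiliation_id]

print(affiliation_id, org.text)  # 04 XYZ Corp.
```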
{
"text": "Building upon the XML output of the NPTagger, the EventTagger recognizes events by applying its lexicon-driven, syntactically based generic patterns. These patterns tag events in the presence of at least one of the arguments specified in the lexical entry for a predicate. Subsequent patterns try to find additional arguments as well as place and time adjunct information for the tagged event. As an example of the EventTagger's generic patterns, consider the simplified pattern below. This pattern matches an event-denoting verb that requires a direct object of type weapon (e.g., \"fire a gun\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "</PNP>",
"sec_num": null
},
{
"text": "(& {AND $VP {ARG2_SYN=DO} {ARG2_SEM=WEAPON}} {AND $ARTIFACT {SUBTYPE=WEAPON}})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "</PNP>",
"sec_num": null
},
{
"text": "The important aspect of REES is its declarative, lexicon-driven approach. This approach requires a lexicon entry for each event-denoting word, which is generally a verb. (In the pattern above, & denotes concatenation, AND is a Boolean operator, and $VP and $ARTIFACT are macro references for complex phrases.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "</PNP>",
"sec_num": null
},
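The constraint check that this pattern performs can be sketched in a few lines. This is a minimal illustration of the matching idea only; the feature names mirror the pattern, but the function and data shapes are assumptions, not REES's pattern engine.

```python
# Hypothetical sketch of the generic pattern above: match an
# event-denoting verb phrase whose lexicon entry requires a direct
# object (DO) of semantic type WEAPON, followed by a weapon artifact.

def matches_weapon_pattern(vp, artifact):
    # $VP constraints: ARG2 must be a direct object of type WEAPON.
    vp_ok = vp.get("ARG2_SYN") == "DO" and vp.get("ARG2_SEM") == "WEAPON"
    # $ARTIFACT constraint: the artifact's subtype must be WEAPON.
    art_ok = artifact.get("SUBTYPE") == "WEAPON"
    return vp_ok and art_ok

vp = {"head": "fire", "ARG2_SYN": "DO", "ARG2_SEM": "WEAPON"}
gun = {"head": "gun", "SUBTYPE": "WEAPON"}
print(matches_weapon_pattern(vp, gun))  # True
```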
{
"text": "The lexicon entry specifies the syntactic and semantic restrictions on the verb's arguments. For instance, the following lexicon entry is for the verb \"attack.\" It indicates that the verb \"attack\" belongs to the CONFLICT ontology and to the ATTACK_TARGET type. The first argument for the verb \"attack\" is semantically an organization, location, person, or artifact (ARG1_SEM), and syntactically a subject (ARG1_SYN). The second argument is semantically an organization, location, person, or artifact, and syntactically a direct object. The third argument is semantically a weapon and syntactically a prepositional phrase introduced by the preposition \"with\". This generic, lexicon-driven event extraction approach makes REES easily portable because new types of events can be extracted by just adding new verb entries to the lexicon. No new patterns are required. Moreover, this approach allows for easy customization: a person with no knowledge of the pattern language can configure the system to extract new events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "</PNP>",
"sec_num": null
},
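A lexicon entry of the kind just described could be encoded declaratively as below. The dict layout and the `accepts` helper are illustrative assumptions (the paper does not show its lexicon file format); the ontology, type, and argument restrictions follow the text.

```python
# Illustrative encoding of the "attack" lexicon entry described above:
# CONFLICT ontology, ATTACK_TARGET type, and syntactic/semantic
# restrictions on each of the three arguments.
ATTACK_ENTRY = {
    "predicate": "attack",
    "ontology": "CONFLICT",
    "type": "ATTACK_TARGET",
    "args": [
        {"syn": "SUBJECT", "sem": {"ORG", "LOCATION", "PERSON", "ARTIFACT"}},
        {"syn": "DO",      "sem": {"ORG", "LOCATION", "PERSON", "ARTIFACT"}},
        {"syn": "PP:with", "sem": {"WEAPON"}},
    ],
}

def accepts(entry, arg_index, syn, sem):
    """Check whether a candidate phrase satisfies one argument slot."""
    spec = entry["args"][arg_index]
    return spec["syn"] == syn and sem in spec["sem"]

print(accepts(ATTACK_ENTRY, 2, "PP:with", "WEAPON"))  # True
```

Porting to a new event type would then amount to adding one such entry, with no new extraction patterns.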
{
"text": "While the tagging component is similar to other pattern-based IE systems (e.g., Appelt et al. 1995; Aone et al. 1998, Yangarber and Grishman 1998) , our EventTagger is more portable through a lexicon-driven approach.",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "Appelt et al. 1995;",
"ref_id": "BIBREF1"
},
{
"start": 100,
"end": 131,
"text": "Aone et al. 1998, Yangarber and",
"ref_id": null
},
{
"start": 132,
"end": 146,
"text": "Grishman 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "</PNP>",
"sec_num": null
},
{
"text": "After the tagging phase, REES sends the XML output through a rule-based co-reference resolution module that resolves:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "2.2"
},
{
"text": "\u2022 definite noun phrases of Organization, Person, and Location types, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "2.2"
},
{
"text": "\u2022 singular person pronouns: he and she.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "2.2"
},
{
"text": "Only \"high-precision\" rules are currently applied to selected types of anaphora. That is, we resolve only those cases of anaphora whose antecedents the module can identify with high confidence. For example, the pronoun rules look for the antecedents only within 3 sentences, and the definite NP rules rely heavily on the head noun matches. Our highprecision approach results from our observation that unless the module is very accurate (above 80% precision), the coreference module can hurt the overall extraction results by over-merging templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "2.2"
},
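The two high-precision rules mentioned (a 3-sentence window for pronouns, head-noun matching for definite NPs) can be sketched as follows. This is a simplified illustration under assumed data shapes, not the module's actual rule set.

```python
# Minimal sketch of the "high-precision" co-reference rules described
# above: resolve a pronoun only to a compatible antecedent within the
# last 3 sentences, and a definite NP only on a head-noun match.

def resolve_pronoun(pronoun_sent, pronoun_gender, mentions, window=3):
    """mentions: list of (sentence_index, head, gender) tuples."""
    candidates = [
        m for m in mentions
        if pronoun_sent - m[0] <= window and m[2] == pronoun_gender
    ]
    return candidates[-1] if candidates else None  # most recent match

def resolve_definite_np(head, mentions):
    matches = [m for m in mentions if m[1] == head]
    return matches[-1] if matches else None

mentions = [(0, "Ashley", "male"), (1, "company", None)]
print(resolve_pronoun(2, "male", mentions))  # within window: resolved
print(resolve_pronoun(5, "male", mentions))  # outside window: None
```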
{
"text": "A typical template generation module is a hard-coded post-processing module which has to be written for each type of template. By contrast, our Template Generation module is unique as it uses declarative rules to generate and merge templates automatically so as to achieve portability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generation Module",
"sec_num": "2.3"
},
{
"text": "REES outputs the extracted information in the form of either MUC-style templates, as illustrated in Figures 1 and 2, or XML. A crucial part of a portable, scalable system is being able to output different types of relations and events without changing the template generation code. REES maps the XML-tagged output of the co-reference module to templates using declarative template definitions, which specify the template label (e.g., ATTACK_TARGET), XML attribute names (e.g., ARGUMENT1), corresponding template slot names (e.g., ATTACKER), and the type restrictions on slot values (e.g., string).",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Declarative Template Generation",
"sec_num": "2.3.1"
},
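The attribute-to-slot mapping just described can be sketched like this. The definition syntax below is an assumption for illustration; only the idea (declarative definitions instead of per-template code) comes from the text.

```python
# Sketch of declarative template generation: a definition maps XML
# attribute names to template slot names (with type restrictions), so
# a new event type needs a new definition, not new generation code.
ATTACK_TARGET_DEF = {
    "label": "ATTACK_TARGET",
    "slots": [
        ("ARGUMENT1", "ATTACKER", "entity"),
        ("ARGUMENT2", "TARGET", "entity"),
        ("ARGUMENT3", "WEAPON", "entity"),
        ("TIME", "TIME", "string"),
    ],
}

def generate_template(definition, xml_attrs):
    template = {"TYPE": definition["label"]}
    for attr, slot, _typ in definition["slots"]:
        if attr in xml_attrs:          # fill only the slots present
            template[slot] = xml_attrs[attr]
    return template

attrs = {"ARGUMENT1": "an Iraqi warplane", "ARGUMENT2": "the frigate Stark"}
result = generate_template(ATTACK_TARGET_DEF, attrs)
print(result)
```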
{
"text": "One of the challenges of event extraction is to be able to recognize and merge those event descriptions which refer to the same event. The Template Generation module uses a set of declarative, customizable rules to merge co-referring events into a single event. Often, the rules reflect pragmatic knowledge of the world. For example, consider the rule below for the DYING event type. This rule establishes that if two die events have the same subject, then they refer to the same event (i.e., a person cannot die more than once).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Merging",
"sec_num": "2.3.2"
},
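A merge rule of the DYING kind can be sketched as a key-slot agreement check. The rule encoding and helper names here are illustrative assumptions; the pragmatic content (same-subject DYING events co-refer) is from the text.

```python
# Sketch of a declarative event-merging rule: two events of the same
# type merge when the rule's key slots agree. For DYING, the key slot
# is the subject, since a person cannot die more than once.
MERGE_RULES = {"DYING": ["SUBJECT"]}

def should_merge(ev1, ev2, rules=MERGE_RULES):
    if ev1["type"] != ev2["type"]:
        return False
    keys = rules.get(ev1["type"], [])
    return bool(keys) and all(ev1.get(k) == ev2.get(k) for k in keys)

def merge(ev1, ev2):
    merged = dict(ev1)
    for k, v in ev2.items():
        merged.setdefault(k, v)  # keep the first value on conflict
    return merged

a = {"type": "DYING", "SUBJECT": "the ambassador", "TIME": "Monday"}
b = {"type": "DYING", "SUBJECT": "the ambassador", "PLACE": "Kabul"}
print(should_merge(a, b))  # True
```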
{
"text": "For some applications such as database population, the user may want to validate the system output. REES is provided with a Javabased Graphical User Interface that allows the user to run REES and display, delete, or modify the system output. As illustrated in Figure 4 , the tool displays the templates on the bottom half of the screen, and the user can choose which template to display. The top half of the screen displays the input document with extracted phrases in different colors. The user can select any slot value, and the tool will highlight the portion of the input text responsible for the slot value. This feature is very useful in efficiently verifying system output. Once the system's output has been verified, the resulting templates can be saved and used to populate a database.",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graphical User Interface (GUI)",
"sec_num": "2.4"
},
{
"text": "The blind set F-Measure for 31 types of relations (73.95%) exceeded our initial goal of 70%. While the blind set F-Measure for 61 types of events was 53.75%, it is significant to note that 26 types of events achieved an F-Measure over 70%, and 37 types over 60% (cf. Table 4). For reference, though not exactly comparable, the best-performing MUC-7 system achieved 87% in TE, 76% in TR, and 51% in event extraction. Regarding relation extraction, the difference in the score between the training and blind sets was very small. In fact, the total F-Measure on the blind set is less than 2 points lower than that of the training set. It is also interesting to note that for 8 of the 12 relation types where the F-Measure dropped more than 10 points, the training set includes fewer than 20 instances. In other words, there seems to be a natural correlation between a low number of instances in the training set and low performance on the blind set.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 280,
"text": ". Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Graphical User Interface (GUI)",
"sec_num": "2.4"
},
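The F-Measures in the evaluation combine recall and precision with the standard harmonic mean; a quick check against the table is sketched below (the reported figures are computed from unrounded R and P, so they differ slightly from what the rounded table values give).

```python
# Balanced F-Measure as used in the MUC evaluations:
# F = 2 * R * P / (R + P), the harmonic mean of recall and precision.

def f_measure(recall, precision):
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

# Blind-set relations, rounded table values R=74, P=74:
print(round(f_measure(74, 74), 2))  # 74.0 (table reports 73.74)
```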
{
"text": "There was a significant drop between the training and blind sets in event extraction: 11 points. We believe that the main reason is that the total number of events in the training set is fairly low: 801 instances of 61 types of events (an average of 13 per event type), where 35 of the event types had fewer than 10 instances. In fact, 9 out of the 14 event types which scored lower than 40% F-Measure had fewer than 10 examples. In comparison, there were 34,000 instances of 39 types of relations in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphical User Interface (GUI)",
"sec_num": "2.4"
},
{
"text": "The contribution of the co-reference module is illustrated in the table below. Co-reference resolution consistently improves F-Measures both in training and blind sets. Its impact is larger in relation than event extraction. In the next two sections, we analyze both false positives and false negatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphical User Interface (GUI)",
"sec_num": "2.4"
},
{
"text": "REES produced precision errors in the following cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Positives (or Precision Errors)",
"sec_num": "3.1"
},
{
"text": "\u2022 Most of the errors were due to over-generation of templates. These are mostly cases of co-referring noun phrases that the system failed to resolve. For example: \"Panama ... the nation ... this country ... his country.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Positives (or Precision Errors)",
"sec_num": "3.1"
},
{
"text": "\u2022 Rules for the co-reference module are still under development, and at present REES handles only limited types of plural noun phrase anaphora. \u2022 Spurious events resulted from verbs in conditional constructions (e.g., \"if ... then ...\") or from ambiguous predicates, for instance, \"appoint\" as a POLITICAL event vs. a PERSONNEL CHANGE event. \u2022 The subject of a verb was misidentified. This is particularly frequent in reduced relative clauses: \"Kabul radio said the latest deaths brought to 38 the number of people killed in the three car bomb explosions\" (wrong subject: \"the number of people\" tagged as the KILLER instead of the victim).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "country\"",
"sec_num": null
},
{
"text": "Below, we list the most frequent recall errors in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Negatives (or Recall Errors)",
"sec_num": "3.2"
},
{
"text": "\u2022 Some event arguments are mentioned with event nouns instead of event verbs. The current system does not handle noun-based event extraction. India's acquisition last month of the nuclear submarine from the Soviet Union... (SELLER=\"Soviet Union\" and TIME=\"last month\" come with the noun-based event \"acquisition.\") \u2022 Pronouns \"it\" and \"they,\" which carry little semantic information, are currently not resolved by the co-reference module. We asked a person who is not involved in the development of REES to review the event extraction output for the blind set. This person reported that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Negatives (or Recall Errors)",
"sec_num": "3.2"
},
{
"text": "\u2022 In 35% of the cases where the REES system completely missed an event, it was because the lexicon was missing the predicate. REES's event predicate lexicon is rather small at present (a total of 140 verbs for 61 event types) and is mostly based on the examples found in the training set. \u2022 In 30% of the cases, the subject or object was elliptical. The system does not currently handle ellipsis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Negatives (or Recall Errors)",
"sec_num": "3.2"
},
{
"text": "\u2022 In 25% of the cases, syntactic/semantic argument structures were missing from existing lexical entries. It is quite encouraging that simply adding additional predicates and predicate argument structures to the lexicon could significantly increase the blind set performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Negatives (or Recall Errors)",
"sec_num": "3.2"
},
{
"text": "We believe that improving co-reference resolution and adding noun-based event extraction capability are critical to achieving our ultimate goal of at least 80% F-Measure for relations and 70% for events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "4"
},
{
"text": "As discussed in Section 3.1 and 3.2, accurate co-reference resolution is crucial to improving the accuracy of extraction, both in terms of recall and precision. In particular, we identified two types of high-payoff coreference resolution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "4.1"
},
{
"text": "\u2022 definite noun phrase resolution, especially plural noun phrases",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "4.1"
},
{
"text": "\u2022 3rd person neutral pronouns \"it\" and \"they.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-reference Resolution",
"sec_num": "4.1"
},
{
"text": "REES currently handles only verb-based events. Noun-based event extraction adds more complexity because:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun-based Event Extraction",
"sec_num": "4.2"
},
{
"text": "Nouns are often used in a generic, nonreferential manner (e.g., \"We see a merger as being in the consumer's interest\"), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun-based Event Extraction",
"sec_num": "4.2"
},
{
"text": "When referential, nouns often refer to verb-based events, thus requiring nounverb co-reference resolution (\"An F-14 crashed shortly after takeoff... The crash\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun-based Event Extraction",
"sec_num": "4.2"
},
{
"text": "However, noun-based events are crucial because they often introduce additional key information, as the underlined phrases below indicate: While Bush's meetings with prominent antiapartheid leaders such as Archbishop Desmond Tutu and Albertina Sisulu are important...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun-based Event Extraction",
"sec_num": "4.2"
},
{
"text": "We plan to develop a generic set of patterns for noun-based event extraction to complement the set of generic verb-based extraction patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun-based Event Extraction",
"sec_num": "4.2"
},
{
"text": "In this paper, we reported on a fast, portable, large-scale event and relation extraction system, REES. To the best of our knowledge, this is the first attempt to develop an IE system which can extract such a wide range of relations and events with high accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "It performs particularly well on relation extraction, and it achieves 70% or higher F-Measure for 26 types of events already. In addition, the design of REES is highly portable for future addition of new relations and events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "System Evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project would not have been possible without the contributions of Arcel Castillo, Lauren Halverson, and Sandy Shinn. Our thanks also to Brandon Kennedy, who prepared the hand-tagged data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SRA: Description of the IE 2 System Used for MUC-7",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Lauren",
"middle": [],
"last": "Halverson",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Hampton",
"suffix": ""
},
{
"first": "Mila",
"middle": [],
"last": "Ramos-Santacruz",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 7th Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aone, Chinatsu, Lauren Halverson, Tom Hampton, and Mila Ramos-Santacruz. 1998. \"SRA: Description of the IE 2 System Used for MUC-7.\" In Proceedings of the 7th Message Understanding Conference (MUC-7).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SRI International FASTUS System: MUC-6 Test Results and Analysis",
"authors": [
{
"first": "Douglas",
"middle": [
"E"
],
"last": "Appelt",
"suffix": ""
},
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Israel",
"suffix": ""
},
{
"first": "Megumi",
"middle": [],
"last": "Kameyama",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Kehler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Mabry",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Appelt, Douglas E., Jerry R. Hobbs, John Bear, David Israel, Megumi Kameyama, Andy Kehler, David Martin, Karen Myers, and Mabry Tyson. 1995. \"SRI International FASTUS System: MUC-6 Test Results and Analysis.\" In Proceedings of the 6th Message Understanding Conference (MUC-6).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Text Chunking Using Transformation-Based Learning",
"authors": [
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 3rd ACL Workshop on Very Large Corpora (WVLC95)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramshaw, Lance A., and Mitchell P. Marcus. 1995. \"Text Chunking Using Transformation-Based Learning.\" In Proceedings of the 3rd ACL Workshop on Very Large Corpora (WVLC95).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "NYU: Description of the Proteus/PET System as Used for MUC-7 ST",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 7th Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangarber, Roman and Ralph Grishman. 1998. \"NYU: Description of the Proteus/PET System as Used for MUC-7 ST.\" In Proceedings of the 7th Message Understanding Conference (MUC-7).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 2: Example of Event Template",
"uris": null,
"num": null
},
"TABREF1": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Figures 1 and 2 show sample relation and event</td></tr><tr><td colspan=\"3\">templates. Figure 1 shows a Person-Affiliation</td></tr><tr><td colspan=\"3\">relation template for \"Frank Ashley, a</td></tr><tr><td colspan=\"3\">spokesman for Occidental Petroleum Corp.'\"</td></tr><tr><td colspan=\"3\">&lt;PERSON AFFILIATION-AP8802230207-54&gt; :=</td></tr><tr><td>TYPE:</td><td colspan=\"2\">PERSON AFFILIATION</td></tr><tr><td colspan=\"3\">PERSON: [TE for\"Frank Ashley\"]</td></tr><tr><td>ORG:</td><td colspan=\"2\">[TE for \"Occidental Petroleum\"]</td></tr><tr><td colspan=\"3\">Figure 1: Example of Relation Template</td></tr><tr><td colspan=\"3\">Figure 2 shows an Attack Target event template</td></tr><tr><td colspan=\"3\">for the sentence \"an Iraqi warplane attacked the</td></tr><tr><td colspan=\"3\">frigate Stark with missiles May 17, 1987. \"</td></tr><tr><td colspan=\"3\">&lt;ATTACK TARGET-AP8804160078-12&gt;: = i</td></tr><tr><td>TYPE:</td><td/><td>CONFLICT</td></tr><tr><td colspan=\"2\">SUBTYPE:</td><td>ATTACK TARGET</td></tr><tr><td colspan=\"3\">ATTACKER: [TE for \"an Iraqi warplane\"]</td></tr><tr><td>TARGET:</td><td/><td>[TE for \"the frigate Stark\"]</td></tr><tr><td colspan=\"2\">WEAPON:</td><td>[TE for \"missiles\"]</td></tr><tr><td>TIME:</td><td/></tr></table>",
"num": null
},
"TABREF4": {
"text": "table below shows the system's recall, precision, and F-Measure scores for the training set (200 texts) and the blind set (208 texts) from about a dozen news sources. Each set contains at least 3 examples of each type of relations and events. As we mentioned earlier, \"relations\" includes MUC-style TEs and TRs.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Text</td><td>Task</td><td>Templates</td><td>R</td><td>P</td><td>F-M</td></tr><tr><td>Set</td><td/><td>in keys</td><td/><td/><td/></tr><tr><td/><td>Rel.</td><td>9955</td><td colspan=\"3\">76 74 75.35</td></tr><tr><td colspan=\"2\">Train Events</td><td>2525</td><td colspan=\"3\">57 74 64.57</td></tr><tr><td/><td>Rel. &amp;</td><td>10707</td><td colspan=\"3\">74 74 73.95</td></tr><tr><td/><td>Events</td><td/><td/><td/><td/></tr><tr><td/><td>Rel.</td><td>8938</td><td colspan=\"3\">74 74 73.74</td></tr><tr><td colspan=\"2\">Blind Events</td><td>2020</td><td colspan=\"3\">42 75 53.75</td></tr><tr><td/><td>Rel. &amp;</td><td>9526</td><td colspan=\"3\">69 74 71.39</td></tr><tr><td/><td>Events</td><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF5": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF7": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF9": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}