{
"paper_id": "M91-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:15:28.478507Z"
},
"title": "BBN: Description of the PLUM System as Used for MUC-3",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologie s",
"location": {
"addrLine": "10 Moulton Stree t",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "weischedel@bbn.com"
},
{
"first": "Damaris",
"middle": [],
"last": "Ayuso",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologie s",
"location": {
"addrLine": "10 Moulton Stree t",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Sean",
"middle": [],
"last": "Boisen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologie s",
"location": {
"addrLine": "10 Moulton Stree t",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ingria",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologie s",
"location": {
"addrLine": "10 Moulton Stree t",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Palmucci",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologie s",
"location": {
"addrLine": "10 Moulton Stree t",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "M91-1021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Traditional approaches to the problem of extracting data from texts have emphasized handcrafted linguisti c knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of a DARPA-funded research effort on integrating probabilistic language models with more traditiona l linguistic techniques . Our research and development goals are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": null
},
{
"text": "\u2022 more rapid development of new applications ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": null
},
{
"text": "\u2022 the ability to train (and re-train) systems based on user markings of correct and incorrect output , \u2022 more accurate selection among interpretations when more than one is found, an d \u2022 more robust partial interpretation when no complete interpretation can be found .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": null
},
{
"text": "We have previously performed experiments on components of the system with texts from the Wall Stree t Journal, however, the MUC-3 task is the first end-to-end application of PLUM . All components except parsin g were developed in the last 5 months, and cannot therefore be considered fully mature. The parsing component, th e MIT Fast Parser [4] , originated outside BBN and has a more extensive history prior to MUC-3 .",
"cite_spans": [
{
"start": 342,
"end": 345,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": null
},
{
"text": "A central assumption of our approach is that in processing unrestricted text for data extraction, a non-trivia l amount of the text will not be understood . As a result, all components of PLUM are designed to operate on partiall y understood input, taking advantage of information when available, and not failing when information is unavailable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": null
},
{
"text": "The following section describes the major PLUM components .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": null
},
{
"text": "The PLUM architecture is presented in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SYSTEM ARCHITECTUR E",
"sec_num": null
},
{
"text": "The input to the system is a file containing one or more messages. The sectioning module determines message boundaries, identifies the header, and determines paragraph and sentence boundaries. In addition, we have built a preprocessor which classifies text according to its relevance and topic . We expect this component to allow th e system to ignore paragraphs that are irrelevant and to focus on those that contain relevant information, greatl y increasing the efficiency of the overall system . Time constraints did not permit us to integrate this approach with the rest of our system, however; it was therefore not used for the MUC-3 task .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessin g",
"sec_num": null
},
{
"text": "The first phase of the text processing is assignment of part-of-speech information . In our current system, we use the MIT Fast Parser [4] . In the MITFP, a bi-gram probability model, frequency models for known words (derived from large corpora) and heuristics based on word endings for unknown words, assign part of speech to the highly ambiguous words of the corpus . l Since the MITFP predictions for unknown words were very inaccurate fo r input that is all upper case, we augmented this part-of-speech tagging with probabilistic models (automaticall y l We are now in the process of integrating BBN's POST probabilistic part-of-speech tagger [8] for the tagger i n MITFP. trained) for recognizing words of Spanish origin and words of English origin . This allowed us to tag new words tha t were actually Latin American names highly reliably . The Spanish classifier uses a 5 character hidden Markov model, trained on about 30,000 words of Spanish text . The five-gram model of words of English was derived from text from the Wall Street Journal . ",
"cite_spans": [
{
"start": 135,
"end": 138,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 647,
"end": 650,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": null
},
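{
"text": "As an illustration of the word-origin models described above, here is a minimal, hypothetical sketch (not BBN's actual code); the function names, the padding scheme, and the use of plain character five-gram log-probabilities with a floor for unseen n-grams, in place of a full hidden Markov model, are assumptions made for this example. A word scoring higher under the Spanish model than under the English model would be tagged as a likely Latin American name.\nfrom collections import defaultdict\nimport math\n\ndef train_char_ngram_lm(words, n=5):\n    # Count character n-grams over padded words and convert to log-probabilities.\n    counts, total = defaultdict(int), 0\n    for w in words:\n        padded = '^' + w.lower() + '$'\n        for i in range(len(padded) - n + 1):\n            counts[padded[i:i + n]] += 1\n            total += 1\n    floor = math.log(1.0 / (total + 1))  # fallback score for unseen n-grams\n    return {gram: math.log(c / total) for gram, c in counts.items()}, floor\n\ndef classify_word_origin(word, spanish_lm, english_lm):\n    # Score the unknown word under each model; the higher total log-probability wins.\n    def score(lm):\n        model, floor = lm\n        padded = '^' + word.lower() + '$'\n        grams = [padded[i:i + 5] for i in range(max(1, len(padded) - 4))]\n        return sum(model.get(g, floor) for g in grams)\n    return 'SPANISH' if score(spanish_lm) > score(english_lm) else 'ENGLISH'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": null
},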
{
"text": "Each sentence identified by the sectioning module is passed to the parsing component . The MITFP is a deterministic stochastic parser which does not attempt to generate a single syntactic interpretation of the whol e sentence, rather, it generates one or more parse fragments spanning the input sentence, deferring difficult decision s on attachment ambiguities . Consequently, every sentence is assigned some (set of) syntactic interpretations , producing an average of seven fragments for sentences of the complexity seen in the MUC-3 corpus .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": null
},
{
"text": "Here are the parse fragments generated by MITFP for the second sentence of message 99 in the TST1 corpus , \"THE BOMBS CAUSED DAMAGE BUT NO INJURIES\" (the full text of the message is in Appendix H) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": null
},
{
"text": "(\"THE BOMBS CAUSED DAMAGE\" (S (NP (DET \"THE\") (N \"BOMBS\") ) (VP (AUX) (VP (V \"CAUSED \" ) (NP (N \"DAMAGE\"))))) (\"BUT \" (CONJ \"BUT\") ) (\"NO INJURIES\" (NP (DET \"NO\")(N \"INJURIES\")))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": null
},
{
"text": "( tI .~( PUNCT \" .\") )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": null
},
{
"text": "The semantic interpreter operates on each fragment produced by MITFP in a bottom-up, compositional fashion . Throughout the system, defaults are provided so that missing semantic information or rules do not produce errors , but simply mark semantic elements or relationships as unknown . This is consistent with our belief that partia l understanding has to be a key element of text processing systems, and missing data has to be regarded as a norma l event. This entry indicates that the domain model concept is BOMBING, that a subject argument whose type is PEOPLE should be given the role TI-PERP-OF, and that an object argument of any type should be given the role OBJECT -OF . BOMB-V-1 is the unique identifier of this word sense .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
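{
"text": "A minimal, hypothetical sketch of how a lexical-semantics entry such as BOMB-V-1 could be represented and applied; the dictionary layout, field names, and helper function are assumptions made for illustration, not PLUM's internal format.\n# Hypothetical rendering of the BOMB-V-1 entry described in the text.\nBOMB_V_1 = {\n    'word-sense': 'BOMB-V-1',\n    'concept': 'BOMBING',\n    'subject': {'type': 'PEOPLE', 'role': 'TI-PERP-OF'},\n    'object': {'type': None, 'role': 'OBJECT-OF'},  # None means any type is accepted\n}\n\ndef apply_verb_entry(entry, subject_form, object_form):\n    # Build an event form; missing or mismatched arguments are simply left out,\n    # in the spirit of treating missing information as a normal event.\n    event = {'concept': entry['concept'], 'roles': {}}\n    if subject_form and subject_form.get('type') == entry['subject']['type']:\n        event['roles'][entry['subject']['role']] = subject_form\n    if object_form:\n        event['roles'][entry['object']['role']] = object_form\n    return event",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},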
{
"text": "The semantic rules are based on general syntactic patterns, using wildcards and similar mechanisms to provid e an extra measure of robustness . The basic elements of our semantic representation are \"semantic forms\", each o f which introduces a variable (e .g . ? 13) with a type taken from the domain model, and a collection of predicate s pertaining to that variable .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "There are three basic types of semantic forms : entities of the domain, events, and states of affairs . Each of these three can be further categorized as known, unknown, and referential . Entities correspond to the people, places , things, and time intervals of the domain . These are related in important ways, such as through events (who did wha t to whom) and states of affairs (properties of the entities) . Entity descriptions typically arise from noun phrases ; events and states of affairs may be described in clauses .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
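{
"text": "A small sketch of the \"semantic form\" notion described above (a fresh variable such as ?13, a domain-model type, and a collection of predicates about that variable); the class layout below is an assumption made for illustration only.\nimport itertools\n\n_counter = itertools.count(1)\n\nclass SemanticForm:\n    # Each form introduces a fresh variable with a domain-model type and predicates.\n    def __init__(self, kind, type_, predicates=None):\n        self.var = '?%d' % next(_counter)\n        self.kind = kind              # 'ENTITY', 'EVENT', or 'STATE-OF-AFFAIRS'\n        self.type = type_             # domain-model concept, e.g. 'PERSON' or 'BOMBING'\n        self.predicates = list(predicates or [])\n\nterrorists = SemanticForm('ENTITY', 'PERSON', [('social-role-of', 'TERRORISM'), ('number-of', 'PLURAL')])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},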
{
"text": "Not everything that is represented in the semantics has actually been understood . For example, the predicate PP-MODIFIER indicates that two entities (expressed as noun phrases) are connected via a certain preposition . In this way, we have a \"placeholder\" for the information that a certain structural relation holds between these two items , even though we do not know what the actual semantic relation is . Sometimes understanding the relation more fully is of no consequence, since the information does not contribute to the template-filling task . The information is maintained, however, so that later expectation-driven processing can use it if necessary .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "Here is a semantic rule which handles, for example, \"group of businessmen\", \"murder of a man\", and \"terrorists of the FMLN\" :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "For an NP dominating an NP1, and a PP whose PREP is \"OF\" and which dominates NP2: If NP1 is in (\"GROUP, \"BAND\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "; return semantics of NP2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "If NP1 is an EVENT of type TERRORIST ; make NP2 the OBJECT-OF NP1 and return resul t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "If type of NP1 is PEOPLE and type of NP2 is ORGANIZATION, merge semantics, showing that NP ! BELONGS-TO NP2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "otherwise use a more general NP => NP PP rule An important consequence of the fragmentation produced by MITFP is that top-level constituents are typically more shallow and less varied than full sentence parses . As a result, more semantics coverage was obtained early o n in the development process with few semantic rules than would have been expected if the system had had to cove r widely varied syntactic structures before producing any semantic structures . In this way, semantic coverage was added gradually, while the rest of the system was progressing in parallel .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
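{
"text": "The \"NP of NP\" rule above, rendered as a runnable sketch; the dictionary representation of semantic forms, the helper names, and the general fallback rule are assumptions made for the example.\ndef np_of_np_semantics(np1, np2, general_np_pp_rule):\n    # np1 and np2 are semantic forms (dicts) for 'NP1 of NP2'.\n    head = (np1.get('head') or '').upper()\n    if head in ('GROUP', 'BAND'):\n        return np2                                        # 'group of businessmen': semantics of NP2\n    if np1.get('kind') == 'EVENT' and np1.get('type') == 'TERRORIST':\n        np1.setdefault('roles', {})['OBJECT-OF'] = np2    # 'murder of a man'\n        return np1\n    if np1.get('type') == 'PEOPLE' and np2.get('type') == 'ORGANIZATION':\n        np1.setdefault('predicates', []).append(('BELONGS-TO', np2))  # 'terrorists of the FMLN'\n        return np1\n    return general_np_pp_rule(np1, np2, 'OF')             # fall back to the general NP => NP PP rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},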
{
"text": "Another novel aspect of our use of the MITFP was in combining its output fragments . After having assigned semantic representations to the fragments, it is often possible to make some of the attachment decisions deferred b y the MITFP. For example, it is possible to combine two NPs of compatible semantic types that are conjoined, o r attach prepositional phrases preferentially, using information automatically derived from a corpus [7] . While w e lacked sufficient time to pursue this as fully as we would have liked, we did use this for certain proper nam e constructions, and anticipate using further fragment combining strategies as our semantic coverage increases . Figure 2 shows a graphical version of the semantics generated for the first fragment of sentence 1 in message 99 :",
"cite_spans": [
{
"start": 435,
"end": 438,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 674,
"end": 682,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},
{
"text": "[Figure 2 (graphical semantic representation for the fragment \"POLICE HAVE REPORTED THAT TERRORISTS TONIGHT BOMBED THE EMBASSIES OF THE PRC\"): entities include \"POLICE\" (PERSON, social role LAW ENFORCEMENT, number PLURAL), \"TERRORISTS\" (PERSON, social role TERRORISM, number PLURAL), \"THE EMBASSIES\" (BUILDING, social role DIPLOMATIC, number PLURAL), and the PRC (COUNTRY, canonical name \"PEOPLES REPUBLIC OF CHINA\"); events include a COMMUNICATION event (agent, object) and a BOMBING event (ti-perp, object), with the preposition \"OF\" recorded as a pp-modifier between the embassies and the PRC.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2: Example Semantic Representation",
"sec_num": null
},
{
"text": "In this example note that the prepositional phrase in \"embassies of the PRC\" was not connected properl y semantically, as evidenced by the use of the general \"pp-modifier\" relation . This is because we had no case frame rule for <diplomatic building> of <country> .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2 : Example Semantic Representation",
"sec_num": null
},
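{
"text": "A sketch of one fragment-combining step of the kind described before Figure 2 (joining adjacent, conjoined NP fragments whose semantic types are compatible); the fragment representation and the compatibility test are simplifying assumptions made for the example.\ndef combine_conjoined_nps(fragments, compatible):\n    # fragments: list of (category, semantic_form); merge NP CONJ NP runs when the types are compatible.\n    out, i = [], 0\n    while i < len(fragments):\n        if (i + 2 < len(fragments)\n                and fragments[i][0] == 'NP' and fragments[i + 1][0] == 'CONJ' and fragments[i + 2][0] == 'NP'\n                and compatible(fragments[i][1], fragments[i + 2][1])):\n            merged = {'kind': 'ENTITY', 'type': fragments[i][1]['type'],\n                      'members': [fragments[i][1], fragments[i + 2][1]]}\n            out.append(('NP', merged))\n            i += 3\n        else:\n            out.append(fragments[i])\n            i += 1\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Interpreter",
"sec_num": null
},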
{
"text": "The discourse component of PLUM performs the operations necessary to derive, from the semanti c representation of the fragments in the input message, a high level \"discourse event structure\", or a representation o f the events of interest that occurred in the message . Each event in the discourse event structure is similar in principl e to the notion of a \"frame\", with its corresponding \"slots\" or fields . There is a correspondence between a discours e event and the semantics that the semantic interpreter assigns to an event in the text . However, the semantic representation assigned by the interpreter can only include relations contained locally in a fragment (after fragmen t combination) ; the discourse module must infer other long-distance or indirect relations not explicitly found by th e interpreter. The template generator then uses the structures created by the discourse component to generate the fina l templates. Currently only terrorist incidents (and \"possible terrorist incidents\") generate discourse events, since thes e are the core events for MUC-3 template generation . The discourse component is further discussed in the pape r \"Computational Aspects of Discourse in the Context of MUC-3\" in these proceedings .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},
{
"text": "Two primary structures are created by the discourse processor which are used by the template generator : the discourse predicate-database and the discourse event structure. The database contains all the predicates mentioned in the semantic representation of the message (e .g., that some entity is the object of an event) . It supports unification of semantic variables, so that all the information can be easily retrieved when references in the text are resolved . Any other inferences done by the discourse component also get added to the database . While only one database is produced at present, ideally there should be several, to handle multiple inference paths .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},
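{
"text": "A sketch of a predicate database supporting unification of semantic variables, as described above; the union-find representation is an implementation choice assumed for this example, not necessarily how PLUM stores its database.\nclass PredicateDatabase:\n    def __init__(self):\n        self.predicates = []   # triples of (predicate, variable, value)\n        self.parent = {}       # union-find structure over semantic variables\n\n    def _find(self, var):\n        self.parent.setdefault(var, var)\n        while self.parent[var] != var:\n            var = self.parent[var]\n        return var\n\n    def unify(self, var_a, var_b):\n        # Record that two variables denote the same discourse entity.\n        self.parent[self._find(var_a)] = self._find(var_b)\n\n    def add(self, predicate, var, value):\n        self.predicates.append((predicate, var, value))\n\n    def lookup(self, var):\n        # Retrieve every predicate recorded for this variable or anything unified with it.\n        root = self._find(var)\n        return [p for p in self.predicates if self._find(p[1]) == root]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},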
{
"text": "To create the discourse event structure, the discourse component processes each semantic form produced by the interpreter, adding its information to the database and performing reference resolution (currently only pronouns an d proper name references) when needed . When a semantic form for an event of interest is encountered, a discourse event is generated, and any slots already found by the interpreter are filled in the event . This event is then merged with a previous event if they are compatible . This heuristic assumes that the events were derived from repeate d references to a single real event in the text .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},
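{
"text": "A sketch of the merge step for discourse events (folding a new event into a previous one when they are compatible); the compatibility condition shown, matching event types with no conflicting slot fillers, is an assumption made for illustration.\ndef merge_if_compatible(previous_events, new_event):\n    # new_event has the shape {'type': ..., 'slots': {slot: filler}}; merge into the most recent compatible event.\n    for event in reversed(previous_events):\n        if event['type'] != new_event['type']:\n            continue\n        if any(slot in event['slots'] and event['slots'][slot] != filler\n               for slot, filler in new_event['slots'].items()):\n            continue                        # conflicting filler: treat as a distinct real-world event\n        event['slots'].update(new_event['slots'])\n        return previous_events\n    previous_events.append(new_event)       # nothing compatible: start a new discourse event\n    return previous_events",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},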
{
"text": "Once all the semantic forms have been processed, heuristic rules are applied to fill in any unfilled slots b y looking at text surrounding the forms which triggered a given event. Each filler found is assigned a score based o n where it was found in relation to an event trigger, indicating a higher confidence for fillers found closer to a trigger . This will not always be a valid assumption, but has proved to be a good approximation .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},
{
"text": "Following is the discourse event structure created by using information in the first three sentences (spanning 2 paragraphs) of message 99: In the example above, a score of 0 indicates the filler was found directly by the semantics ; 4 indicates it wa s found in the same paragraph; and 6 that it was found in an adjacent paragraph . Note that El Salvador, though not i n the text, was introduced by the defmition of San Isidro in the lexicon, which had only been seen previously as a tow n of El Salvador .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},
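{
"text": "A sketch of the distance-based confidence scoring described above (0 for fillers found directly by the semantics, 4 for fillers in the same paragraph as the trigger, 6 for fillers in an adjacent paragraph); the candidate representation and the selection of the lowest-scoring candidate are assumptions made for the example.\ndef score_candidate(trigger_paragraph, candidate):\n    # candidate: {'value': ..., 'paragraph': int, 'from_semantics': bool}; lower scores mean higher confidence.\n    if candidate['from_semantics']:\n        return 0\n    distance = abs(candidate['paragraph'] - trigger_paragraph)\n    if distance == 0:\n        return 4\n    if distance == 1:\n        return 6\n    return None                             # too far from the trigger to use\n\ndef fill_slot(trigger_paragraph, candidates):\n    scored = [(score_candidate(trigger_paragraph, c), c) for c in candidates]\n    scored = [(s, c) for s, c in scored if s is not None]\n    return min(scored, key=lambda sc: sc[0]) if scored else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Processing",
"sec_num": null
},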
{
"text": "The template generator takes the event structure produced by discourse processing and fills out the applicationspecific templates . Clearly much of this process is governed by the specific requirements of the application , considerations which have little to do with linguistic processing . For example, in our domain model, all terroris t incidents have a result, but the MUC-3 task description states that, if the incident type is MURDER, the RESUL T slot is to be left unspecified . The template generator must incorporate these kinds of arbitrary constraints, as well a s deal with the basic details of formatting .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
{
"text": "The template generator uses a combination of data-driven and expectation-driven strategies . First th e information in the event structure is used to produce initial values. At this point, values which should be filled i n but are not available in the event structure are supplied from defaults, either from the header (e .g., date and location information) or from reasonable guesses (e.g . that the object of a murder is usually a suitable filler for the human target slot when the semantic type of the object is unknown) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
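{
"text": "A sketch of the data-driven-then-default filling strategy described above (header date and location as fallbacks, and a guess that the untyped object of a murder fills the human target slot); the slot names follow the MUC-3 style, but the exact keys and event layout are assumptions made here.\ndef fill_template(event, header):\n    template = dict(event.get('slots', {}))\n    # Expectation-driven defaults when the event structure left a slot empty.\n    template.setdefault('DATE OF INCIDENT', header['date'])\n    template.setdefault('LOCATION OF INCIDENT', header['location'])\n    obj = event.get('object')\n    if (event.get('type') == 'MURDER' and 'HUMAN TARGET: ID(S)' not in template\n            and obj is not None and obj.get('type') is None):\n        # Reasonable guess: an untyped object of a murder is usually the human target.\n        template['HUMAN TARGET: ID(S)'] = obj.get('description')\n    return template",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generation",
"sec_num": null
},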
{
"text": "We expect to eventually use a classifier at this stage of processing . This is especially appropriate for template slots with a set list of possible fillers, e .g . perpetrator confidence, category of incident, etc .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
{
"text": "Here is the first template generated by PLUM for message 99 in the TST1 corpus : Several things were processed correctly here : \u2022 we correctly identified the nature of the attack, the identity of the attacking individuals, and the identity and type of the target, an d \u2022 we correctly determined the nature of the damage, including the negation in \"NO INJURIES\" . However, several points were missed : \u2022 we failed to understand \"TONIGHT\", and so filled in the default of some time before the header date;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXAMPLE",
"sec_num": null
},
{
"text": "\u2022 the identity of the terrorist organization was missed because our strategy for looking for perpetrators was to o inflexible and did not keep looking once \"TERRORISTS\" was found ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXAMPLE",
"sec_num": null
},
{
"text": "\u2022 our system does not yet attempt to fill the foreign target slot, so naturally we missed that filler ; and \u2022 our semantics for locations are too limited, listing only the town of San Isidro (which is in El Salvador) and not the neighborhood of San Isidro (which is in Lima, Peru) . There is a reference to Lima ; the syntactic structure assigned, however, does not permit the proper semantics to identify it as a location .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXAMPLE",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Toward Understanding Text with a Very Large Vocabulary",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Ayuso",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bobrow",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Maclaughlin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayuso, D .M., Bobrow R ., MacLaughlin, D ., Meteer, M., Ramshaw, L ., Schwartz, R. and Weischedel, R . Toward Understanding Text with a Very Large Vocabulary. In Proceedings of the Speech and Natural Language Workshop, Morgan-Kaufmann Publishers, Inc . June, 1990 .",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the Second Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K . A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143 . ACL, 1988.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Common Facts Data Base",
"authors": [
{
"first": "W",
"middle": [],
"last": "Crowther",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "89--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crowther, W. A Common Facts Data Base. In Proceedings of the Speech and Natural Language Workshop, pages 89-93 . Morgan Kaufmann Publishers Inc ., San Mateo, CA, February 1989 .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parsing the LOB Corpus",
"authors": [
{
"first": "C",
"middle": [
"G"
],
"last": "De Marcken",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 28th Annual Meeting of the Association fo r Computational Linguistics",
"volume": "",
"issue": "",
"pages": "243--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "de Marcken, C .G . Parsing the LOB Corpus . Proceedings of the 28th Annual Meeting of the Association fo r Computational Linguistics, pages 243-251 . 1990 .",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "First Steps Towards an Annotated Database of America n English",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Magerman",
"suffix": ""
}
],
"year": 1990,
"venue": "Readings for Tagging Linguistic Information in a Text Corpus Langendoen and Marcus, tutorial for the 28th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M ., Santorini , B ., and Magerman, D . 1990, \"First Steps Towards an Annotated Database of America n English\" Readings for Tagging Linguistic Information in a Text Corpus Langendoen and Marcus, tutorial for the 28th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Annotation Manual for the Penn Treebank Project",
"authors": [
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santorini, B . Annotation Manual for the Penn Treebank Project. CIS Department . University of Pennsylvania . May 1990 .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Proceedings of the Fourth DARPA Speech and Natural Language Workshop",
"authors": [
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Ayuso",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bobrow",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Boisen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Palmucci",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weischedel, R ., Ayuso, D . M., Bobrow, R ., Boisen, S ., Ingria, R., and Palmucci, J . Partial Parsing, A Report on Work in Progress, Proceedings of the Fourth DARPA Speech and Natural Language Workshop, February 1991 .",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Empirical Studies in Part of Speech Labelling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Fourth DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weischedel, R ., Meteer, M ., and Schwartz, R ., Empirical Studies in Part of Speech Labelling, Proceedings of the Fourth DARPA Speech and Natural Language Workshop, February, 1991 .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "PLUM System Architecture",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "The semantic component encompasses both lexical semantics and semantic rules. The semantic lexicon is separate from the parser's lexicon and has much less coverage . At present it contains the following numbers o f entries:Lexical semantic entries typically include a domain model concept, as well as predicates pertaining to it. Fo r example, here is the lexical semantics for the verb BOMB :",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "PERPETRATOR : ID OF INDIV(S) \"TERRORISTS \" 6. PERPETRATOR : ID OR ORG(S ) 7. PERPETRATOR CONFIDENC E 8. PHYSICAL TARGET : ID(S) \"THE EMBASSIES \" 9. R1JMSICAL TARGET : TOTAL PLURAL 10 . PHYSICAL TARGET : TYPE(S) DIPLOMAT OFFICE OR RESIDENCE : \"THE EMBASSIES \" 11 . HUMAN TARGET : ID(S ) 12 . HUMAN TARGET : TOTAL NU M 13 . HUMAN TARGET: TYPE(S ) 14. TARGET : FOREIGN NATION S 15. INSTRUMENT: TYPE(S) * 16 . LOCATION OF INCIDENT EL SALVADOR : SAN ISIDRO (TOWN ) 17 . EFFECT ON PHYSICAL TARGET SOME DAMAGE : \"THE EMBASSIES \" 18 . EFFECT ON HUMAN TARGET NO INJURY : \"-\"",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}