{ "paper_id": "1993", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:34:17.246746Z" }, "title": "Probabilistic Incremental Parsing in Systemic Functional Grammar", "authors": [ { "first": "A", "middle": [ "Ruvan" ], "last": "Weerasinghe", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Unit", "institution": "University of Wales College of Cardiff", "location": { "country": "UK" } }, "email": "" }, { "first": "Robin", "middle": [ "P" ], "last": "Fawcett", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Unit", "institution": "University of Wales College of Cardiff", "location": { "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we suggest that a key feature to look for in a successful parser is its abil ity to lend itself naturally to semantic inter pretation. We therefore argue in favour of a parser based on a semantically oriented model of grammar, demonstrating some of the bene fits that such a model offers to the parsing pro cess. In particular we adopt a systemic func tional syntax as the basis for implementing a chart based probabilistic incremental parser for a non-trivial subset of English.", "pdf_parse": { "paper_id": "1993", "_pdf_hash": "", "abstract": [ { "text": "In this paper we suggest that a key feature to look for in a successful parser is its abil ity to lend itself naturally to semantic inter pretation. We therefore argue in favour of a parser based on a semantically oriented model of grammar, demonstrating some of the bene fits that such a model offers to the parsing pro cess. In particular we adopt a systemic func tional syntax as the basis for implementing a chart based probabilistic incremental parser for a non-trivial subset of English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "\u2022 On leave from the Department of Statistics and Computer Science, University of Colombo, Sri Lanka. (Fawcett et al ., 1993; Matthiessen, 1991) .", "cite_spans": [ { "start": 101, "end": 124, "text": "(Fawcett et al ., 1993;", "ref_id": "BIBREF6" }, { "start": 125, "end": 143, "text": "Matthiessen, 1991)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "The majority of the research in the field of Natural Language Understanding (NLU) is based on models of grammar which make a clear distinction between the levels of syn tax and semantics. Such models tend to be strongly influenced by formal linguistics in the general Chomskyan paradigm, and/ or by mathematical formal language theory, both of which make them conducive to computer im plementation. Essentially, these models con stitute an attempt to 'stretch' techniques that", "sec_num": null }, { "text": "The semantic orientation of functional grammars, however, is to some extent in con flict with the better understood techniques for parsing syntax. The research described in 1 It is evident , however, that researchers working in the formal . linguistics paradi gm have in recent years increasingly realized the importance of the functional aspects of language, e.g. in au gm enting their models with syntactico-semantic features. this paper presents a probabilistic approach advantage that it addresses the problems of to parsing that yields a rich . syntax, using a maintainability and consistency of the gram systemic functionaJ gramrp.ar (SFG).In qoing mar (as used by b. oth the generator . 
and the so, however, it also shows how some of the parser) , but it runs into problems of search techniques used in traditional syntax parsing , space, and suffers from the limitation that the can be adapted to serve as useful tools for ' information is extracted from artificial random the problem. It will be shown that our parser generation. is able to produce richly annotated flat' parse The current parser is an attempt to over trees that are particularly well-suited to higher come the latter problem -but not at the ex level processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "have been applied successfully to parsing arti ficial ( and so unambiguous) languages, in or der to apply them to natural language (NL). In recent years, however, models of language that are derived from the text-descriptive tra dition in linguistics have emerged as poten tially relevant to NL U. These models empha size the semantic and fu nctional richness of language rather than its more formal and syn tactic properties. Such models may challenge widely held assumptions, e.g. that the key notion in modelling syntax is grammaticality, and that this is to be modelled using some ver sion of the concept of a phrase structure gram mar (PSG) .Since such grammars emerge from use in analysing texts, they have something in common with the sort of grammars that tend to be used in corpus linguistics. To date the strongest influence of these grammars has been in Natural Language Generation (NLG)", "sec_num": "1" }, { "text": "pense of the former. The major emphases of The main contributions to the formal spec-the parser therefore can be stated as follows: ification of SFG, as they affect NL U, have been by Patten and Ritchie (1986 ), Mellish (1988 ), Patten (1988 , Kasper (1987) and Brew ( 1991) . These have mainly been con cerned with the reverse traversal of system networks\u2022 in order to get at the features from the items (words). They all conclude that sys temic classification is NP-hard, but seek to iso late tractable sub-networks in order to be able to optimjse reversal. It is thus apparent that a direct reverse traversal of the networks may not be the best approach to parsing in SFG.", "cite_spans": [ { "start": 184, "end": 208, "text": "Patten and Ritchie (1986", "ref_id": "BIBREF18" }, { "start": 209, "end": 225, "text": "), Mellish (1988", "ref_id": "BIBREF13" }, { "start": 226, "end": 241, "text": "), Patten (1988", "ref_id": "BIBREF17" }, { "start": 244, "end": 257, "text": "Kasper (1987)", "ref_id": "BIBREF7" }, { "start": 262, "end": 274, "text": "Brew ( 1991)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "have been applied successfully to parsing arti ficial ( and so unambiguous) languages, in or der to apply them to natural language (NL). In recent years, however, models of language that are derived from the text-descriptive tra dition in linguistics have emerged as poten tially relevant to NL U. These models empha size the semantic and fu nctional richness of language rather than its more formal and syn tactic properties. Such models may challenge widely held assumptions, e.g. that the key notion in modelling syntax is grammaticality, and that this is to be modelled using some ver sion of the concept of a phrase structure gram mar (PSG) .Since such grammars emerge from use in analysing texts, they have something in common with the sort of grammars that tend to be used in corpus linguistics. 
To date the strongest influence of these grammars has been in Natural Language Generation (NLG)", "sec_num": "1" }, { "text": "Work of a more implementational nature is reported in Kasper (1988 ), Atwell et al. (1988 and Matthiessen (1991) . The common ap proach to parsing systemic grammar in these has been to employ a 'cover grammar' for pre processing the syntactic structure of the input string (instead of attempting to directly re verse the networks and realization rules), and then, as a second stage, to do the semantic interpretation by accessing the features con tained in the system networks .. O'Donoghue (1991b) suggests one possible way to avoiding this, namely by the use of a 'vertical strips parser'. This extracts the 'syntax rules' that are implicit in the grammar through analysing a corpus of text randomly generated by the generator (GENESYS 2 ). His approach has the 2 GENESYS is the main generator in the COMMU NAL project; It stands for GENErate SYStemically ;COMMUNAL stands for the Convivial Man Machine ... Using NAtural Language, and is a DRA sponsored 1. To maintain a close correspondence be tween the syntactic representation and the semantic representation which is to be extracted from it (this havi\ufffdg implica tions for possible interleaved processing).", "cite_spans": [ { "start": 54, "end": 66, "text": "Kasper (1988", "ref_id": "BIBREF8" }, { "start": 67, "end": 89, "text": "), Atwell et al. (1988", "ref_id": "BIBREF0" }, { "start": 94, "end": 112, "text": "Matthiessen (1991)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "have been applied successfully to parsing arti ficial ( and so unambiguous) languages, in or der to apply them to natural language (NL). In recent years, however, models of language that are derived from the text-descriptive tra dition in linguistics have emerged as poten tially relevant to NL U. These models empha size the semantic and fu nctional richness of language rather than its more formal and syn tactic properties. Such models may challenge widely held assumptions, e.g. that the key notion in modelling syntax is grammaticality, and that this is to be modelled using some ver sion of the concept of a phrase structure gram mar (PSG) .Since such grammars emerge from use in analysing texts, they have something in common with the sort of grammars that tend to be used in corpus linguistics. To date the strongest influence of these grammars has been in Natural Language Generation (NLG)", "sec_num": "1" }, { "text": "2. To obtain a syntactically and functionally rich parse tree ( even when there is-some ungrammaticality in the\u2022 input).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "have been applied successfully to parsing arti ficial ( and so unambiguous) languages, in or der to apply them to natural language (NL). In recent years, however, models of language that are derived from the text-descriptive tra dition in linguistics have emerged as poten tially relevant to NL U. These models empha size the semantic and fu nctional richness of language rather than its more formal and syn tactic properties. Such models may challenge widely held assumptions, e.g. that the key notion in modelling syntax is grammaticality, and that this is to be modelled using some ver sion of the concept of a phrase structure gram mar (PSG) .Since such grammars emerge from use in analysing texts, they have something in common with the sort of grammars that tend to be used in corpus linguistics. 
To date the strongest influence of these grammars has been in Natural Language Generation (NLG)", "sec_num": "1" }, { "text": "3. To improve efficiency by ( a) parsing incre mentally and (b) guiding the parsing pro cess by probability based prediction and the use of feature unification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "have been applied successfully to parsing arti ficial ( and so unambiguous) languages, in or der to apply them to natural language (NL). In recent years, however, models of language that are derived from the text-descriptive tra dition in linguistics have emerged as poten tially relevant to NL U. These models empha size the semantic and fu nctional richness of language rather than its more formal and syn tactic properties. Such models may challenge widely held assumptions, e.g. that the key notion in modelling syntax is grammaticality, and that this is to be modelled using some ver sion of the concept of a phrase structure gram mar (PSG) .Since such grammars emerge from use in analysing texts, they have something in common with the sort of grammars that tend to be used in corpus linguistics. To date the strongest influence of these grammars has been in Natural Language Generation (NLG)", "sec_num": "1" }, { "text": "To this end we reject the strategy of adopt ing a PSG-type 'cover grammar', in the style of Kasper (1988) and adopt instead a systemic syntax as the basis of the parser. \u2022This is stored in the form of 1. Componence, filling and exponence ta bles, as described in section 2.3 and", "cite_spans": [ { "start": 92, "end": 105, "text": "Kasper (1988)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "have been applied successfully to parsing arti ficial ( and so unambiguous) languages, in or der to apply them to natural language (NL). In recent years, however, models of language that are derived from the text-descriptive tra dition in linguistics have emerged as poten tially relevant to NL U. These models empha size the semantic and fu nctional richness of language rather than its more formal and syn tactic properties. Such models may challenge widely held assumptions, e.g. that the key notion in modelling syntax is grammaticality, and that this is to be modelled using some ver sion of the concept of a phrase structure gram mar (PSG) .Since such grammars emerge from use in analysing texts, they have something in common with the sort of grammars that tend to be used in corpus linguistics. To date the strongest influence of these grammars has been in Natural Language Generation (NLG)", "sec_num": "1" }, { "text": "The output of the parser's incremental eval uation of the parse can be exploited by a se mantic interpreter of the kind described by O'Donoghue (1991a O'Donoghue ( , 1993 ; see also Fawcett project at the University of Wales College of Cardiff, UK. (1993) . Essentially, this runs the system networks in reverse to collect the features required 3 .", "cite_spans": [ { "start": 133, "end": 150, "text": "O'Donoghue (1991a", "ref_id": "BIBREF14" }, { "start": 151, "end": 170, "text": "O'Donoghue ( , 1993", "ref_id": "BIBREF16" }, { "start": 249, "end": 255, "text": "(1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities of \u2022 the ele ments in the componence tables", "sec_num": "2." 
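To make the shape of these stored resources a little more concrete, here is a minimal sketch, in Python, of how the componence templates, the filling table and the exponence table (the 'lexicon') might be laid out. The unit, element and item labels follow the paper's notation, but the structures and every probability value are invented for illustration; this is not the actual COMMUNAL data.

```python
# Hypothetical sketch of the parser's stored grammar resources.
# Labels follow the paper's notation (cl, ngp, pgp, qqgp; S, C, dd, h, ...);
# the probability values are invented for illustration only.

# 1. Componence: for each unit, the ordered list of elements it may contain
#    ('potential structure' templates rather than strict re-write rules).
COMPONENCE = {
    "ngp":  ["dq", "vq", "dd", "m", "mth", "h", "qth", "qsit"],
    "pgp":  ["p", "cv"],
    "qqgp": ["t", "a", "f"],
}

# 2. Filling: for each unit, the elements it can fill, with probabilities
#    (e.g. a clause only rarely fills a Subject, but often fills a Complement).
FILLING = {
    "ngp": {"S": 0.45, "C": 0.35, "cv": 0.20},
    "cl":  {"C": 0.60, "A": 0.35, "S": 0.05},
}

# 3. Exponence (the 'lexicon'): for each item, the elements its senses may
#    expound, in order of likelihood.
EXPONENCE = {
    "notice": {"M": 0.7, "h": 0.3},
    "the":    {"dd": 1.0},
    "man":    {"h": 0.95, "M": 0.05},
}

def elements_expounded_by(item: str) -> list[str]:
    """Return candidate elements for an item, most probable first."""
    senses = EXPONENCE.get(item, {})
    return sorted(senses, key=senses.get, reverse=True)

if __name__ == "__main__":
    print(elements_expounded_by("notice"))   # ['M', 'h']
```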
}, { "text": "In the rest of this paper we will introduce the concept of 'rich syntax' with respect to SFG (section 2), and then describe the tech niques we adopt for parsing it (section 3). Fi nally, in section ref conclusions we will evaluate the work done so far and discuss its limitations and future work envisaged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities of \u2022 the ele ments in the componence tables", "sec_num": "2." }, { "text": "2 Parsing fo r rich syntax", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities of \u2022 the ele ments in the componence tables", "sec_num": "2." }, { "text": "Before we describe the nature of systemic func tional syntax, we need to \u2022 point out that the syntactic structures ( to be discussed in sec tion 2.2 ) are not the heart of the grammar, but the outputs from the operation of the SYS TEM NETWORKS and their associated REAL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "IZATION RULES 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "SFG is a model of grammar developed from a functional view of language which has its roots in the work of Firth and the Prague School; Its major architect is Halli day. The more well known computer imple mentations of it have been developed mainly in the complementary field of Natural Lan guage Generation (NLG). Some of these in clude Davy's PROTEUS(1978) , Mann and Matthiessen's NIGEL(1985) and Fawcett and Tucker's GENESYS(1990) . One significant NL U system based upon systemic syntax is Winograd's SHRDLU(1972) .", "cite_spans": [ { "start": 337, "end": 357, "text": "Davy's PROTEUS(1978)", "ref_id": null }, { "start": 360, "end": 394, "text": "Mann and Matthiessen's NIGEL(1985)", "ref_id": null }, { "start": 399, "end": 433, "text": "Fawcett and Tucker's GENESYS(1990)", "ref_id": null }, { "start": 494, "end": 517, "text": "Winograd's SHRDLU(1972)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "The core of the grammar consists of a great many choice points, known as systems 5 , at 3 An alternative would be to have a separate compo sitional semantics component based on the fu nctional paradi gm described in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "4 Readers familiar with how a systemic functional grammar works may wish to skip this section. 5 For these, and for an overview of the role of SFG 3 each which the system must take one path or another by choosing one of two ( and some times more) semantic features. Quite large numbers of such systems combine, using a small set of AND and OR operators, to form a large network, as shown in figure 1. The big lexicogrammar which the parser described here is designed to work with has about 600 grammatically realized systems. The network is traversed from left to right, and each such traversal generates a 'selection expression ' (i.e. a bundle) of features. Some of these have at tached to them REALIZATION RULES, and it is these which, one by one, combine to build the semantically motivated 'syntax' structures that we shall describe in section 2.2. 
For example, consider the fragment of a net work shown in figure 1 below 6 \u2022 It shows a sim plified version of the current network in the 'midi' version of the COMMUNAL grammar. What is not shown here is a detailed formal specification of the realization rules for the fea tures collected by following the various possi ble pathways through the network. The first two realizations are however expressed infor mally: i.e. the meaning of [information] plus [giver] is realized by having the Subject (S) be fore the Operator ( 0), if there is an Operator, and if not by having the Subject before the Main Verb (M).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "It should be noted that in the 'full' grammar there are probabilities attached to each sys tem ( or choice point). This enables the model to escape from the conceptual prison of the concept of grammaticality and enables us to account for very unlikely, yet possible choices being made. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "-{ :=\u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022\u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 :. : \ufffd:\u2022 \ufffd, confirmation-see k er \u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022 \u2022\u2022\u2022\u2022\u2022 \u2022\u2022 \u2022\u2022 \u2022\u2022 \u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022 \u2022\u2022\u2022\u2022 \u2022\u2022 \u2022 \u2022\u2022 \u2022\u2022 \u2022 \u2022\u2022\u2022\u2022 \u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022 \u2022\u2022\u2022\u2022\u2022\u2022\u2022 \u2022\u2022\u2022 Ha sn 't I v y rea d It? MOOD (others) 1 \u2022 \u2022 ---\u2022 \" ' \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 RadlU ,; mple-di , l , ., . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . \u00b0\" -..", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".1 Systemic Functional Grammar", "sec_num": "2" }, { "text": "The syntactico-semantic analysis produced by the parser differs from traditional syntactic parse trees in at least the following four im portant ways. 2. Secondly, the emphasis on fu nction in the model is evident in the elements which constitute the final non-terminals in the syntax tree, which are categorised in terms of their fu nction in the unit above, rather than as 'word classes'. In this scheme, the term 'noun' for example is a label for one of the classes of words which may expound the head of a ngp. Others may include pronouns, proper names or one(s). Again, very is not treated simply as an other 'adverb' (which misleadingly sug gests that it functions similarly to quickly, etc) , but as a 'temperer'. This is because it typically 'tempers' a quality of either a 'thing', as in (lb) below, or a 'situation', as in (le). 
It is thus an element of what is here termed a 'quantity-quality group', in which the 'head' element, which expresses the 'quality', is termed the 'apex' (a) and the 'modifier' element, which 'tempers' it by expressing a 'quantity' of that quality, is termed a 'temperer'. This functional enrichment of the syntax provides a nat ural way to account for the difference be tween the functions of very and big in sen tences la and lb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically motivated sys temic functional aspects of the model", "sec_num": "2.2" }, { "text": "(la) She noticed the big fat man.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically motivated sys temic functional aspects of the model", "sec_num": "2.2" }, { "text": "(lb) She noticed the very fat man.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically motivated sys temic functional aspects of the model", "sec_num": "2.2" }, { "text": "(le) She ran very quickly to the window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically motivated sys temic functional aspects of the model", "sec_num": "2.2" }, { "text": "Finally, also note that, the grammar al lows some (but not all) elements to be ei ther 'expounded' by lexical or grammat ical items or 'filled; by a syntactic unit. For example, consider the quantifying de terminer ( dq), which is EXPOUNDED in (2) and 3 that specifies the syntax. Instead there are semantic features whose realization rules, col lectively, build up syntactic structure. The in formation that a parser needs to have available to it is only implicitly present in the generator, and it is not in a form that is readily usable by the parser. 0 'Donoghue ( 1991 b) explores one possible way of overcoming this problem, namely by extracting the 'rules' ( or 'legal se quences' ) from sentences randomly generated by the generator (GENESYS). Our approach is to extract from the system networks and re alization rules the information about syntax that is relevant to the work of the parser, and to state it in a form that is more amenable to this task 9 \u2022 The four major types of units recognized by the parser's syntax are the clause (cl) , the nominal group (ngp ), the prepositional group (pgp) and the quantity-quality group (qqgp) 10 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically motivated sys temic functional aspects of the model", "sec_num": "2.2" }, { "text": "Of these,four units, the clause has by far the most complex and variable syntax. The ele ments of the ngp, pgp and qqgp on the other hand can be considered for practical purposes to be fixed, and the presence or absence of el ements within such groups is reflected in our model in the transition probabilities (see sec tion 3.1). Because of the fixed sequence of el ements in these groups, we can at this point use a re-write rule notation to represent these.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically motivated sys temic functional aspects of the model", "sec_num": "2.2" }, { "text": "Here, the '-+-' is used to denote the COM PONENCE relationship. Thus, for instance, a pgp can be composed of a preposition (p) and a completive (cv). 
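Before turning to the limitations of these specifications, here is a hedged sketch of how the element-transition probabilities that supplement them might be stored and used to score a candidate element sequence for a nominal group. The element labels are the paper's; the transition table, the back-off value for unseen transitions and the use of a plain geometric mean are assumptions made purely for illustration.

```python
import math

# Hypothetical element-transition probabilities within the nominal group (ngp).
# '$' marks the unit boundary; the values are invented for illustration and
# stand in for counts extracted from a hand-parsed corpus such as POW.
NGP_TRANSITIONS = {
    ("$", "dd"): 0.50, ("$", "dq"): 0.10, ("$", "h"): 0.35,
    ("dd", "m"): 0.30, ("dd", "h"): 0.60,
    ("m", "h"): 0.85,  ("m", "m"): 0.10,
    ("h", "$"): 0.80,  ("h", "qth"): 0.15,
}

def sequence_score(elements: list[str]) -> float:
    """Geometric-mean transition score of an element sequence within a ngp.

    A geometric mean (rather than a raw product) keeps longer sequences
    comparable with shorter ones; unseen transitions receive a tiny back-off
    probability instead of ruling the sequence out as 'ungrammatical'.
    """
    path = ["$", *elements, "$"]
    probs = [NGP_TRANSITIONS.get(pair, 1e-4) for pair in zip(path, path[1:])]
    return math.exp(sum(math.log(p) for p in probs) / len(probs))

if __name__ == "__main__":
    print(sequence_score(["dd", "h"]))       # 'the man'      -> relatively high
    print(sequence_score(["dd", "m", "h"]))  # 'the fat man'  -> also plausible
    print(sequence_score(["m"]))             # bare modifier  -> very low
```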
However, the above specifications have a 9 We hope to be able to device a technique for au tomatic extraction of 'rules' from the system networks in future versions of the parser, but we defer this task for the present as it has been shown to be possible (O'Donoghue, 1991b). 10 We should state here that the syntax described below handles only a subset of the 'midi grammar' contained in the system networks of GENESYS re ferred to above, and that we have nothing to say here about phenomena such as 'raising' and 'long-distance dependency' ( though many aspects of discontinuity are already covered within GENESYS, and these types of phenomena are now being considered in the SFG framework) . 11 The key to the list of elements used in the parser is given in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ngp -+-dq, vq, dd, m, mth, h, qth, qsit pgp -+-P, CV qqgp -+-t, a, f 11", "sec_num": null }, { "text": "(1) those elements that are optional, (2) the degree of optionality, and (3) the dependen cies that may hold between them, absolutely and relatively ( e.g. there can be no vq if there is no dq, and it is highly unlikely that there will be a dq without an h). As we shall see, it is the information about probabilities that captures these facts. The most complex of the groups, the clause, has a more variable potential structure which here we denote ( for convenience) by the re write rule 12 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "number of grave limitations. They fail to show", "sec_num": "7" }, { "text": "As we shall shortly see, the information about adjacent elements expressed in these specifications, together with other vital in formation on optionality and probabilities, is made available to the parser in a somewhat different form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "A second type of information required by the parser is a set of statements about FILL ING, i.e. about the elements which units can fill, thus 13 Note that in this analysis, the fragment the elements which these units can fill both corre sponds to a single nominal group (filling the element of Subject and the participant role of Affected) and constitutes a single semantic 'thing' (or 'object').", "cite_spans": [ { "start": 142, "end": 144, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "We would argue, with Winograd ( e.g. 
Winograd, 1972), that such 'flat' tree representations lend themselves more naturally to higher level processing than do trees with many branchings, because each layer of structure corresponds to a semantic unit, and ultimately to a unit in the 'belief' representation.", "cite_spans": [ { "start": 37, "end": 52, "text": "Winograd, 1972)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "14 See appendix for 'conflation' abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "There is no genuine equivalent relationship to this in a PSG, because such grammars do not have the 'double labelling' of nodes in the tree as both element and unit (or, with coordination, units) described above. That is, there is no distinction between componence (unit down to element) and filling (element down to unit(s)). From the viewpoint of a parser, the relationship we are considering here is a unit-up-to-element table. Here the probabilistic information is extremely valuable; it is useful for the parser to know, for example, that it is relatively unusual for a clause to fill a Subject, but that a clause fairly frequently fills a Complement or Adjunct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "We have been considering the 'unit up to element' relationship of componence. 3 How the parser works", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "[Fragment of the figure 2 analysis: O < may, N < not, X < have, X < been, M < seeing, C2/ngp - h < them, A/qqgp - a < recently, E]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "cl -+-B, A* , C2, 0# ,S, 0#, N, I, A* , X* , M, Cm, Cl, C2, Cm, A*", "sec_num": null }, { "text": "The fundamental strategy adopted in parsing for the rich functional syntax described in sections 2.2 and 2.3 is an adapted form of bottom-up chart parsing with limited top-down prediction. One of the main reasons for the adaptation of the chart parsing algorithm is to account for some of the context sensitivity exhibited by the SF syntax. For example, the possibility of an 'Operator' occurring after a Subject is dependent on its non-occurrence before it. Similarly, certain types of Adjunct are mutually exclusive within a given clause. For this reason, our parser has lists of 'potential structure' templates (as shown in simplified form in section 2.3) instead of the usual CF-PSG type rules. These are augmented by the element transition probability tables and a probabilistic lexicon, to assist the adapted probabilistic chart parser implemented here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The basic algorithm", "sec_num": "3.1" }, { "text": "Hence, the chart is composed of edges, each with a list of the elements that can 'potentially' occur following it, together with optionality and mutual exclusivity constraints, features associated with the edge and a unique probability score representing its likelihood of occurrence. 
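A minimal sketch of what such an edge might look like as a data structure is given below. The field names and types are our own assumptions, intended only to make the description above concrete; they are not the parser's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    """One edge in the probabilistic chart (a sketch, not the actual parser).

    An inactive edge spans recognised material; an active edge also carries a
    hypothesis about which elements may still follow it.
    """
    unit: str                       # e.g. 'ngp', 'cl'
    found: list[str]                # elements recognised so far, e.g. ['dd', 'h']
    start: int                      # chart vertex where the edge begins
    end: int                        # chart vertex where the edge ends
    potential_next: list[str] = field(default_factory=list)  # elements that may follow
    excluded: set[str] = field(default_factory=set)          # mutually exclusive elements
    features: dict[str, str] = field(default_factory=dict)   # percolated features, e.g. {'number': 'plural'}
    probability: float = 1.0        # likelihood score for this (partial) analysis

    @property
    def is_active(self) -> bool:
        # An edge is 'active' while it is still looking for further elements.
        return bool(self.potential_next)

if __name__ == "__main__":
    e = Edge(unit="ngp", found=["dd"], start=0, end=1,
             potential_next=["m", "h"], probability=0.5)
    print(e.is_active)   # True: still looking for a modifier or head
```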
As in the case of standard 'active' chart parsers, 'active' or hypothesis bearing edges too are maintained in a similar way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The basic algorithm", "sec_num": "3.1" }, { "text": "The unification of edges is used only to per form a 'percollatory' fu nction rather than a 'restrictive' one, so as to give less importance to the concept of 'grammaticality'. The aim of this is to allow some 'ungrammaticality' in order to extract at least some meaning from any utterance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The basic algorithm", "sec_num": "3.1" }, { "text": "The probabilities themselves are calculated from three sources: ( the 'lexicon' ).", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 79, "text": "( the 'lexicon'", "ref_id": null } ], "eq_spans": [], "section": "The basic algorithm", "sec_num": "3.1" }, { "text": "2. The filling probabilities for each unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The item probabilities contained m the exponence table", "sec_num": "1." }, { "text": "In a given application of the 'fundamen tal rule', three component probabilities are used in working out a weighted geometric mean. It is our observation that, as Mager man and Marcus (1991) point out, joint prob abilities calculated as products are not accu rate estimates of such likelihoods owing to the events considered violating the independence assumption. The three probabilities thu\ufffd af fecting the new edge created are :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "1. The probability of the 'active' edge in the 'attachment'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "where the above fragment begins the input string, the clause level transition\u2022 probabilities will heavily favour the S to be the element be ing filled by the ngp ( with a high score for the transition ($,S)) than C or cv.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "Consider the following example sentence to see how such a probabilistic model can assist in arriving at a correct analysis of a clause with lexical ambiguity : ( 11) Did you notice him?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "In this example, though notice could.be both a head (noun) or a verb, the transition prob ability of head-head is very low . (Noun-noun compounding will not score well as the prob ability of you being able to fill a modifier is negligible). On the other hand, the transition probability of S-M is very high and so notice will be parsed as a M in the leading 'theories'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "Finally, consider the following 'garden path' type sentence to see how our probabilistic model copes with this type of ambiguity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." 
}, { "text": "{12) The cast iron their clothes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "According to the COBUILD dictionary, cast is most commonly a noun(h) or a verb(M) while iron is most commonly a noun(h), but could also be a verb(M) and more rarely an adjective(a) . A part-of-speech tagger encoun tering this input string will need to determine which of the transitions ( dd,h,h), ( dd,h,a ( dd,h,h ), the for mer as it could have information about cast and iron being able to follow each other in this way and the latter to account for noun-noun compounding.", "cite_spans": [], "ref_spans": [ { "start": 287, "end": 306, "text": "( dd,h,h), ( dd,h,a", "ref_id": null }, { "start": 307, "end": 315, "text": "( dd,h,h", "ref_id": null } ], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "Our parser on the other hand, though ini tially favouring this theory like the class-based tagger, will also advance the theory contain ing iron as a main verb{M). Once a certain 'height' of the parse tree is reached \u2022 however, the probability score of theories treating\u2022 iron as a noun(h) will diminish while those treating it as a verb(M) will be re-inforced by the high transition probabilities of the higher elements (S,M) and (M,C) .", "cite_spans": [ { "start": 421, "end": 426, "text": "(S,M)", "ref_id": null }, { "start": 431, "end": 436, "text": "(M,C)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "It is the availability of these 'higher' func tional syntax level transition probabilities to the parser, that we suspect will enable our parser to perform better than ( conventional) pure probabilistic part-of-speech level models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The transition probabilities between ele ments within a unit.", "sec_num": "3." }, { "text": "A major secondary goal of our work is to build a parser which can function as the front end to a complete interactive NLP system (COM MUN AL) . To this end we have developed an interactive interface to the parser. Incoming items are tagged to focus the search space us 'interpretive' stages of analysis because of the well annotated 'flat' parse trees produced and their (near) one-to-one correspondence with semantic objects in the SFG adopted. (See Van \u2022 der Linden (1992, p. 225) for reasons why traditional PSG-type grammars cannot in gen eral be parsed incrementally {n thi\ufffd way).", "cite_spans": [ { "start": 129, "end": 141, "text": "(COM MUN AL)", "ref_id": null }, { "start": 446, "end": 482, "text": "(See Van \u2022 der Linden (1992, p. 225)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The interactive interface", "sec_num": "3.2" }, { "text": "As an example, consider the following sen tence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The interactive interface", "sec_num": "3.2" }, { "text": "(13) The boy with long hair saw Jill in the park.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The interactive interface", "sec_num": "3.2" }, { "text": "Here, as soon as the user starts to input Jill, the item saw is tagged, with its syntactic context guiding the decision. 
Meanwhile, The boy with the long hair has already been iden tified as a nominal group (unit) with certain ( quite limited) semantic properties, and it is thus ready for verification as, say, {person102) very early in the parse process. 3. Flagging abbreviations appropriately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The interactive interface", "sec_num": "3.2" }, { "text": "The 'final non-terminals' output by this rou tine are input to the parser incrementally, while simultaneously accepting further input. Thus by the time the user input is completed (by the tagger encountering an 'Ender' item) much of it has already been analysed by the parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Signalling unknown words or assigning likely elements which . might expound them", "sec_num": "4." }, { "text": "The incremental nature of processing at this syntax level can be further exploited at higher", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Signalling unknown words or assigning likely elements which . might expound them", "sec_num": "4." }, { "text": "It is necessary firstly to evaluate our parser with respect to the richly annotated functional parse it produces. While time and space effi ciency issues of the algorithm have not been brought to bear too heavily on the work done, the techniques adopted are general. enough to be used for parsing other functional grammars represented as 'structural templates' ( and sup plemented by fe atures and transition probabil ities, and filling and exponence tables), with minimum modification to the algorithm itself. The information contained in the flat parse trees constructed by the parser, while being richer in content, also allow for \u2022 natural in terleaving of syntax with higher semantic and pragmatic processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation as a general fu nc tional parser", "sec_num": "4.1" }, { "text": "In this sense, we consider the current parser to be a successful precursor to a fully proba-bilistic chart parser for functionally rich gram mars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation as a general fu nc tional parser", "sec_num": "4.1" }, { "text": "More detailed formal evaluation, both of time-space efficiency of the algorithm and the parser's accuracy in analysing free text needs to be postponed for the present, until the parser is 'trained' on the fully systemically (hand) parsed Polytechnic of Wales (POW) corpus 16 . At the time of writing, a tool for the extraction of the necessary probabilities has been implemented (Day, 1993) , though it needs as yet to be linked to the parser's prob ability module.", "cite_spans": [ { "start": 379, "end": 390, "text": "(Day, 1993)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation as a general fu nc tional parser", "sec_num": "4.1" }, { "text": "Though the general algorithm is concerned with text parsing, our specific area of appli cation is to use the parser as a front-end to the COMMUNAL NLP system, which is al ready equipped with a large systemic func tional grammar embodied in its generator GENESYS. For this reason, the parser is equipped with an interactive interface which acts on input in an incremental way. It is also able to achieve a significant coverage of the 'midi' version of the GENESYS grammar. 
Our thesis is that this prototype parser will lend itself to being substantially extended to cover other complex grammatical phenomena handled by the 'full' version of the grammar, without the need to make any major alter ations to the techniques employed in it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation as a front-end to COMMUNAL", "sec_num": "4.2" }, { "text": "One of the main limitations of the integrity of the system is that of the parser need ing to be manually supplied with grammati cal information embodied within the genera-tor, GENESYS. An urgent need therefore is for a technique for extracting this information directly without human intervention. This would enable any grammar represented in sys tem network notation to be compiled into a parsable form. The main source of lexical probabilities for the parser has been West (1965) , while el ement transition probabilities have been ex tracted (using the aforementioned interactive tool) from the POW corpus. For a more con sistent approach non-reliant on human inter vention, more work is needed on developing a non-interactive version of the parser which is able to train on hand parsed corpora.", "cite_spans": [ { "start": 470, "end": 481, "text": "West (1965)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations and fu rther work", "sec_num": "4.3" }, { "text": "The improvement of these aspects of the sys tem will allow the current parser to be used as a robust 'real text' parser and to be incorpo rated into a NL U system capable of true inter leaved processing. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and fu rther work", "sec_num": "4.3" }, { "text": "Note that the use of features for constraining the parse forrest using for instance number agreement is not done here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here, * denotes recursive elements while # denotes that the Operator element can be 'conflated' with the fu nctions of X (auxiliary) or M (main verb) .13 Note that B, 0, N, I, X and Cm are directly ex pounded by items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that at this stage the parser does not yet assi gn participant roles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is available from ICAME's archive at the Norwegian Computing Centre for the Humanities in Bergen, Norway.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "2. The probability of the 'inactive' edge of the 'attachment' . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "In this situation, the two edges the and man would invoke the hypothesis ( using the usual 'chart' notation): ngp \ufffd dd . h where the ngp is 'looking for' a head. In the ensuing unification of this active edge with h{man), we consider the probabilities of : 1. the active edge ngp( dd{ the)) 2. the inactive edge h{man) and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4: Edge creation with probabilities", "sec_num": null }, { "text": "A geometric mean of the probabilities of 1 and 2 and a weighted 3 is attached to the new (inactive) edge ngp{dd{the), h{man)), that is thus formed. 
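The combination step just described might be sketched as follows; the relative weight given to the transition component is an invented value, not the one used in the parser.

```python
import math

def combine_edge_probability(p_active: float,
                             p_inactive: float,
                             p_transition: float,
                             transition_weight: float = 2.0) -> float:
    """Weighted geometric mean of the three component probabilities.

    A sketch of the combination described above: the transition probability is
    given a heavier weight so that transition predictions dominate filling and
    exponence.  The weight value here is an assumption for illustration.
    """
    weights = [1.0, 1.0, transition_weight]
    probs = [p_active, p_inactive, p_transition]
    log_mean = sum(w * math.log(p) for w, p in zip(weights, probs)) / sum(weights)
    return math.exp(log_mean)

if __name__ == "__main__":
    # e.g. active edge ngp(dd(the)) = 0.5, inactive edge h(man) = 0.9,
    # transition (dd, h) = 0.6
    print(round(combine_edge_probability(0.5, 0.9, 0.6), 3))
```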
The weighting on the third com ponent makes it favour the transition predic tions over those of filling and exponence.Subsequently, the filling probabilities of S, C, cv etc. will be considered. In the case", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "the transition ( dd,h).", "sec_num": "3." }, { "text": "Also Known As ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Symbol Name", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Prototype parser 1", "authors": [ { "first": "E", "middle": [ "S" ], "last": "Atwell", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Souter", "suffix": "" }, { "first": "T", "middle": [ "F" ], "last": "Donoghue", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atwell, E. S., Souter, C. D., & O'Donoghue, T. F. (1988). Prototype parser 1. Tech . rep. 17, Computational Linguistics Unit, University of Wales College of Cardiff, UK.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Systemic classification and its efficiency", "authors": [ { "first": "C", "middle": [], "last": "Brew", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brew, C. (1991). Systemic classification and its efficiency. Computational Linguistics, 17(4) .", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discourse production: A computer model . of some aspects of a speaker", "authors": [ { "first": "A", "middle": [], "last": "Davy", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davy, A. (1978). Discourse production: A computer model . of some aspects of a speaker. Edinburgh University Press, Edinburgh, UK.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The interactive cor pus query facility: a tool for exploiting parsed natural language corpora", "authors": [ { "first": "M", "middle": [ "D" ], "last": "Day", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Day, M. D. (1993). The interactive cor pus query facility: a tool for exploiting parsed natural language corpora. Mas ter's thesis, University of Wales College of Cardiff, UK.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A generationist ap proach to grammar reversibility in natu ral language processing", "authors": [ { "first": "R", "middle": [ "P" ], "last": "Fawcett", "suffix": "" } ], "year": 1993, "venue": "Reversible Grammar in Nat ural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fawcett, R P. \u2022(1993). A generationist ap proach to grammar reversibility in natu ral language processing. In Strzalkowski, T. (Ed.), Reversible Grammar in Nat ural Language Generation. 
Dordrecht: Kluwer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Demonstration of genesys: a very large, semantically based systemic functional grammar", "authors": [ { "first": "R", "middle": [ "P" ], "last": "Fawcett", "suffix": "" }, { "first": "G", "middle": [ "H" ], "last": "Tucker", "suffix": "" } ], "year": 1990, "venue": "The 13th International Conference on Computational Linguis tics (COLING-90)", "volume": "", "issue": "", "pages": "47--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fawcett, R. P., & Tucker, G. H. (1990). Demonstration of genesys: a very large, semantically based systemic functional grammar. In The 13th International Conference on Computational Linguis tics (COLING-90), pp. 47-49.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "How a systemic functional gram mar works: the role of realization in re alization", "authors": [ { "first": "R", "middle": [ "P" ], "last": "Fawcett", "suffix": "" }, { "first": "G", "middle": [ "H" ], "last": "Tucker", "suffix": "" }, { "first": "Y", "middle": [ "Q" ], "last": "Lin", "suffix": "" } ], "year": 1993, "venue": "New Concepts in Natural Lan guage Generation: Planning, Realiza tion and Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fawcett, R. P., Tucker, G. H., & Lin, Y. Q. ( 1993). How a systemic functional gram mar works: the role of realization in re alization. In Horacek, H., & Zock, M. (Eds.), New Concepts in Natural Lan guage Generation: Planning, Realiza tion and Systems. Pinter, London.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A unification method for disjunctive feature descriptions", "authors": [ { "first": "R", "middle": [ "T" ], "last": "Kasper", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the 25th Annual Meeting of the Association of Computational Lin guistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kasper, R. T. ( 1987). A unification method for disjunctive feature descriptions. In Proceedings of the 25th Annual Meeting of the Association of Computational Lin guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An experimental parser for systemic grammars", "authors": [ { "first": "R", "middle": [ "T" ], "last": "Kasper", "suffix": "" } ], "year": 1988, "venue": "The 12th In ternational Confere nce on Computa tional Linguistics {COLING-88}", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kasper, R. T. ( 1988). An experimental parser for systemic grammars. In The 12th In ternational Confere nce on Computa tional Linguistics {COLING-88}.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pearl: A probabilistic chart parser", "authors": [], "year": null, "venue": "Procedings of the 2nd International Workshop on Parsing Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pearl: A probabilistic chart parser. In Procedings of the 2nd International Workshop on Parsing Technologies.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Nigel: A systemic grammar for text generation", "authors": [ { "first": "W", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "C", "middle": [], "last": "Matthiessen", "suffix": "" } ], "year": 1985, "venue": "Systemic Per spectives on Discourse. Ablex", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mann, W. 
C., & Matthiessen, C. (1985). Nigel: A systemic grammar for text generation. In Freedle, R. 0. (Ed.), Systemic Per spectives on Discourse. Ablex.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Test genera tion and systemic fu nctional linguistics", "authors": [ { "first": "C", "middle": [ "M I M" ], "last": "Matthiessen", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthiessen, C. M. I. M. (1991). Test genera tion and systemic fu nctional linguistics. Pinter, London.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Implementing systemic classification by unification", "authors": [ { "first": "C", "middle": [ "S" ], "last": "Mellish", "suffix": "" } ], "year": 1988, "venue": "Computa tional Linguistics", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mellish, C. S. (1988). Implementing systemic classification by unification. Computa tional Linguistics, 14 (1).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A semantic inter preter for systemic grammars", "authors": [ { "first": "T", "middle": [ "F" ], "last": "O'donoghue", "suffix": "" } ], "year": 1991, "venue": "Pro ceedings of the AGL Workshop on Re versible Grammars", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O'Donoghue, T. F. (1991a) . A semantic inter preter for systemic grammars. In Pro ceedings of the AGL Workshop on Re versible Grammars.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The vertical strip parser : a lazy approach to parsi_ ng. Re port 91", "authors": [ { "first": "T", "middle": [ "F" ], "last": "O'donoghue", "suffix": "" } ], "year": 1991, "venue": "", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O'Donoghue, T. F. (1991b). The vertical strip parser : a lazy approach to parsi_ ng. Re port 91.15, School of Computer Studies, University of Leeds, UK.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Reversible Grammar in Natural Language Generation", "authors": [ { "first": "T", "middle": [ "F" ], "last": "O'donoghue", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O'Donoghue, T. F. (1993). Semantic interpre tation in a systemic grammar. In Strza lkowski, T. (Ed.), Reversible Grammar in Natural Language Generation. Dor drecht: Kluwer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Systemic Text Generation as Problem Solving", "authors": [ { "first": "T", "middle": [], "last": "Patten", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patten, T. (1988). Systemic Text Generation as Problem Solving. Cambridge Univer sity Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "_A formal model of systemic grammar", "authors": [ { "first": "T", "middle": [], "last": "Patten", "suffix": "" }, { "first": "G", "middle": [], "last": "Ritchie", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patten, T., & Ritchie, G. (1986). _A formal model of systemic grammar. 
Research paper 290, Deptartment of AI, Edin butgh University, UK.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Incremental processing and the hierarchical lexicon", "authors": [ { "first": "E.-J", "middle": [], "last": "Van Der Linden", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van der Linden, E.-J. (1992). Incremental processing and the hierarchical lexicon. Computational Linguistics, 18(2).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A general service list of en glish words", "authors": [ { "first": "M", "middle": [], "last": "West", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "West, M. (1965). A general service list of en glish words. Longmans.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Understanding Natural Language", "authors": [ { "first": "T", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Winograd, T. (1972). Understanding Natural Language. Academic Press Inc.", "links": null } }, "ref_entries": { "FIGREF2": { "text": "A simplified system network showing some of the meaning potential for MOOD in English ( excluding much, e.g. POLARITY and realizations in tags and intonation)", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "by the ITEMS a and one, and FILLED by the nominal group (UNIT) 2 above, reflect and respect a specific commitment to maintaining the closest possible correlation between the units recognized in syntax and those rec ognized in semantics. Thus, in the un marked case, a CLAUSE ( cl) realizes a SIT UATION ( = roughly 'proposition') and a NOMINAL GROUP (ngp) realizes a THING ('object'). Hence our parser produces broad flat trees rather than those with multiple ( often binary) branching; the 'work done' in models of the latter sort by the branching is done in our model by richer labelling. ie. Richer labelling re duces the p.eed to represent relations by distinctive tree\u2022 structure variations, thus enabling tis to restrict the branching to the definition of units that are semanti cally motivated. And this in turn greatly facilitates the transfer of the output from the parse to the next stage of processing.3. Third, we also consider that the notion of absolute grammaticality, which is in trinsic to phrase structure type grammars to be fundamentally misleading. Instead, we take a probabilistic approach to the question of what element may ( or is un likely to) follow what other element in a unit. The concept of ungrammatical ity is thus simply one end of a contin uum of probabilities from 0% to 100%. In this respect, our parser has characteristics in common with stochastic approaches to parsing, and so embodies, in effect, a hy brid model. 
Hence our parser accommodates some measure of ungrammaticality in the in put text, and tries to extract whatever functional information it can from itrather than rejecting it.Consider the ex ample sentence below.", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Thus for instance, the features [manner] , [place] and [time] are 'percolated' up from the items unexpectedly, to Cardiff and on Fri day respectively in (sales, in spite of the recession, was As will be evident from what has been said so up by more than five per cent . far, there is no set of PSG-type re-write rules The type of unification .parser which en forces subject-verb agreement would sim ply return the verdict 'ungrammatical' on encountering such an utterance. Chart based parsers are a slight improvement, in that they would output the 'analysable' fragments of the sentence. Because our parser's goal is to return some semanti cally plausible interpretation, it returns a well formed parse tree out of such input 7 . 7 We take the view that such grammatical features are in fact not normally of any . great help in dis ambiguation, and hence not of much use in fu rther processing.", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "A Systemic Functional Analysis of a sentence the typical SFG analysis of a sentence (Z) shown in figure 2 14 .", "type_str": "figure", "uris": null, "num": null }, "FIGREF7": { "text": "Two of the very tall men who worked in my office have left.", "type_str": "figure", "uris": null, "num": null }, "FIGREF8": { "text": "Some examples of the output of the parser element' :relationship . of componence. Finally, there is the similar 'item up to element' rela tionship of EXPONENCE. This is a list of all the items (roughly, 'words') to be accepted by the parser , with the probabilities -for each sense of each word -of what elements each may ex pound (placed in order). The difference from the previous information source is that it is a very large and constantly modifiable compo nent; the coverage of the unit up to element tables is in comparison quite limited ( and less liable to revision in the light of successfully parsed new texts). This third component is therefore the equivalent in our parser of what is often termed the 'lexicon'. As is shown by the example in figure 2, the SFG approach makes possible the output of syntactically rich, semantically oriented 'flat' tree parse structures. The typical outputs from the parser shown in figure 3 should, it is hoped, give a picture of the kind of data covered by the syntax, and so by the parser 15 .", "type_str": "figure", "uris": null, "num": null }, "FIGREF9": { "text": ") , ( dd,h,M), ( dd,M,h) etc. are more likely. A tagger based on lexical co-occurrences or part of-speech might well favour", "type_str": "figure", "uris": null, "num": null }, "FIGREF10": { "text": "ing a character reading input routine, which is responsible for providing (incrementally) the 4 parser with a 'clean' input by", "type_str": "figure", "uris": null, "num": null }, "FIGREF11": { "text": "Tagging punctuation according to the el ements they expound.2. Handling the syntax of large and decimalnumbers.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "text": "", "content": "
1. Firstly, our model of 'syntax' distinguishes between the relationships of: (a) COMPONENCE, whereby a particular UNIT such as a nominal group (a 'full' noun phrase, denoted by 'ngp') in our systemic syntax is composed of ELEMENTS (functional categories), and (b) FILLING, whereby such a UNIT 'fulfills', as it were, the functional role of the element it fills. So, for instance, a ngp can have (among others) the components deictic determiner (dd), modifier (m) and head (h). At the same time it will 'fill' either the element functioning as Subject (S), a Complement (C1/2), a 'Completive' of a prepositional group, or some other element.
", "html": null, "num": null }, "TABREF1": { "type_str": "table", "text": "", "content": "
S/ngp - h < You
ngp - & < and, dd < your, h < friend
O/X
", "html": null, "num": null } } } }