{ "paper_id": "A00-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:12:09.535519Z" }, "title": "A Framework for MT and Multilingual NLG Systems Based on Uniform Lexico-Structural Processing", "authors": [ { "first": "Benoit", "middle": [], "last": "Lavoie", "suffix": "", "affiliation": {}, "email": "benoit@cogentex.com" }, { "first": "Richard", "middle": [], "last": "Kittredge", "suffix": "", "affiliation": {}, "email": "richard@cogentex.com" }, { "first": "Tanya", "middle": [], "last": "Korelsky", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "", "affiliation": {}, "email": "rambow@research.att.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we describe an implemented framework for developing monolingual or multilingual natural language generation (NLG) applications and machine translation (MT) applications. The framework demonstrates a uniform approach to generation and transfer based on declarative lexico-structural transformations of dependency structures of syntactic or conceptual levels (\"uniform lexico-structural processing\"). We describe how this framework has been used in practical NLG and MT applications, and report the lessons learned.", "pdf_parse": { "paper_id": "A00-1009", "_pdf_hash": "", "abstract": [ { "text": "In this paper we describe an implemented framework for developing monolingual or multilingual natural language generation (NLG) applications and machine translation (MT) applications. The framework demonstrates a uniform approach to generation and transfer based on declarative lexico-structural transformations of dependency structures of syntactic or conceptual levels (\"uniform lexico-structural processing\"). We describe how this framework has been used in practical NLG and MT applications, and report the lessons learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we present a linguistically motivated framework for uniform lexicostructural processing. It has been used for transformations of conceptual and syntactic structures during generation in monolingual and multilingual natural language generation (NLG) and for transfer in machine translation (MT). Our work extends directions taken in systems such as Ariane (Vauquois and Boitet, 1985) , FoG (Kittredge and Polgu6re, 1991) , JOYCE (Rainbow and Korelsky, 1992) , and LFS (Iordanskaja et al., 1992) . Although it adopts the general principles found in the abovementioned systems, the approach presented in this paper is more practical, and we believe, would eventually integrate better with emerging statistics-based approaches to MT. 
* The work performed on the framework by this coauthor was done while at CoGenTex, Inc.", "cite_spans": [ { "start": 257, "end": 262, "text": "(NLG)", "ref_id": null }, { "start": 369, "end": 396, "text": "(Vauquois and Boitet, 1985)", "ref_id": "BIBREF14" }, { "start": 403, "end": 433, "text": "(Kittredge and Polgu6re, 1991)", "ref_id": null }, { "start": 442, "end": 470, "text": "(Rainbow and Korelsky, 1992)", "ref_id": null }, { "start": 481, "end": 507, "text": "(Iordanskaja et al., 1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The framework consists of a portable Java environment for building NLG or MT applications by defining modules using a core tree transduction engine and single declarative ASCII specification language for conceptual or syntactic dependency tree structures 1 and their transformations. Developers can define new modules, add or remove modules, or modify their connections. Because the processing of the transformation engine is restricted to transduction of trees, it is computationally efficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Having declarative rules facilitates their reuse when migrating from one programming environment to another; if the rules are based on functions specific to a programming language, the implementation of these functions might no longer be available in a different environment. In addition, having all lexical information and all rules represented declaratively makes it relatively easy to integrate into the framework techniques for generating some of the rules automatically, for example using corpus-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The declarative form of transformations makes it easier to process them, compare them, and cluster them to achieve proper classification and ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 In this paper, we use the term syntactic dependency (tree) structure as defined in the Meaning-Text Theory (MTT; Mel'cuk, 1988) . However, we extrapolate from this theory when we use the term conceptual dependency (tree) structure, which has no equivalent in MTT (and is unrelated to Shank's CD structures proposed in the 1970s).", "cite_spans": [ { "start": 109, "end": 114, "text": "(MTT;", "ref_id": null }, { "start": 115, "end": 129, "text": "Mel'cuk, 1988)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, the framework represents a generalized processing environment that can be reused in different types of natural language processing (NLP) applications. So far the framework has been used successfully to build a wide variety of NLG and MT applications in several limited domains (meteorology, battlefield messages, object modeling) and for different languages (English, French, Arabic, and Korean).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the next sections, we present the design of the core tree transduction module (Section 2), describe the representations that it uses (Section 3) and the linguistic resources (Section 4). We then discuss the processing performed by the tree transduction module (Section 5) and its instantiation for different applications (Section 6). 
Finally, we discuss lessons learned from developing and using the framework (Section 7) and describe the history of the framework comparing it to other systems (Section 8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The core processing engine of the framework is a generic tree transduction module for lexicostructural processing, shown in Figure 1 . The module has dependency stuctures as input and output, expressed in the same tree formalism, although not necessarily at the same level (see Section 3). This design facilitates the pipelining of modules for stratificational transformation. In fact, in an application, there are usually several instantiations of this module.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "The transduction module consists of three processing steps: lexico-structural preprocessing, main lexico-structural processing, and lexico-structural post-processing. Each of these steps is driven by a separate grammar, and all three steps draw on a common feature data base and lexicon. The grammars, the lexicon and the feature data base are referred to as the linguistic resources (even if they sometimes apply to a conceptual representation). The representations used by all instantiations of the tree transduction module in the framework are dependency tree structures. The main characteristics of all the dependency tree structures are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "\u2022 A dependency tree is unordered (in contrast with phrase structure trees, there is no ordering between the branches of the tree). \u2022 All the nodes in the tree correspond to lexemes (i.e., lexical heads) or concepts depending on the level of representation. In contrast with a phrase structure representation, there are no phrase-structure nodes labeled with nonterminal symbols. 
Labelled arcs indicate the dependency relationships between the lexemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "The first of these characteristics makes a dependency tree structure a very useful representation for MT and multilingual NLG, since it gives linguists a representation that allows them to abstract over numerous crosslinguistic divergences due to language specific ordering (Polgu~re, 1991) .", "cite_spans": [ { "start": 274, "end": 290, "text": "(Polgu~re, 1991)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "We have implemented 4 different types of dependency tree structures that can be used for NLG, MT or both: The DSyntSs and SSyntSs correspond closely to the equivalent structures of the Meaning-Text Theory (MTT; Mel'cuk, 1988) : both structures are unordered syntactic representations, but a DSyntS only includes full meaning-bearing lexemes while a SSyntS also contains function words such as determiners, auxiliaries, and strongly governed prepositions.", "cite_spans": [ { "start": 205, "end": 210, "text": "(MTT;", "ref_id": null }, { "start": 211, "end": 225, "text": "Mel'cuk, 1988)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "In the implemented applications, the DSyntSs are the pivotal representations involved in most transformations, as this is also often the case in practice in linguistic-based MT (Hutchins and Somers, 1997) . Figure 2 illustrates a DSyntS from a meteorological application, MeteoCogent (Kittredge and Lavoie, 1998) , represented using the standard graphical notation and also the RealPro ASCII notation used internally in the framework .", "cite_spans": [ { "start": 177, "end": 204, "text": "(Hutchins and Somers, 1997)", "ref_id": "BIBREF2" }, { "start": 284, "end": 312, "text": "(Kittredge and Lavoie, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 207, "end": 215, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "As Figure 2 illustrates, there is a straightforward mapping between the graphical notation and the ASCII notation supported in the framework. This also applies for all the transformation rules in the framework which illustrates the declarative nature of our approach, The ConcSs correspond to the standard framelike structures used in knowledge representation, with labeled arcs corresponding to slots. We have used them only for a very limited meteorological domain (in MeteoCogent), and we imagine that they will typically be defined in a domain-specific manner. Figure 3 illustrates the mapping between an interlingua defined as a ConcS and a corresponding English DSyntS. 
This example, also taken from MeteoCogent, illustrates that the conceptual interlingua in NLG can be closer to a database representation of domain data than to its linguistic representations.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 565, "end": 573, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "I 1 LOW -5 TO ' t LOw ( A'I~R -5 ATTR TO ( il HIGH ( A']I~R 20 ) ) ) Low -S to high 20", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "As mentioned in (Polgu~re, 1991) , the high level of abstraction of the ConcSs makes them a suitable interlingua for multilingual NLG since they bridge the semantic discrepancies between languages, and they can be produced easily from the domain data. However, most off-the-shelf parsers available for MT produce only syntactic structures, thus the DSyntS level is often more suitable for transfer. Finally, the PSyntSs correspond to the parser outputs represented using RealPro's dependency structure formalism. The PSyntSs may not be valid directly for realization or transfer since they may contain unsupported features or dependency relations. However, the PSyntSs are represented in a way to allow the framework to convert them into valid DSyntS via lexicostructural processing. This conversion is done via conversion grammars customized for each parser. There is a practical need to convert one syntactic formalism to another and so far we have implemented converters for three off-theshelf parsers (Palmer et al., 1998) .", "cite_spans": [ { "start": 16, "end": 32, "text": "(Polgu~re, 1991)", "ref_id": "BIBREF5" }, { "start": 1005, "end": 1026, "text": "(Palmer et al., 1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "The Framework's Tree Transduction Module", "sec_num": "2" }, { "text": "As mentioned previously, the framework is composed of instantiations of the tree transduction module shown in Figure 1 . Each module has the following resources: This consists of lexico-structural mapping rules for transforming the output structures before they can be processed by the next module. As for the preprocessing rules, these rules can be used to fix some discrepancies between modules.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 118, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "Our representation of the lexicon at the lexical level (as opposed to conceptual) is similar to the one found in RealPro. Figure 4 shows a specification for the lexeme SELL. This lexeme is defined as a verb of regular morphology with two lexical-structural mappings, the first one introducing the preposition TO for its 3 r\u00b0 actant, and the preposition FOR for its 4 th actant: (a seller) X1 sells (merchandise) X2 to (a buyer) X3 for (a price) X4. 
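Lexicon mappings such as this one and the grammar rules presented below all rewrite the same kind of object: an unordered dependency tree whose nodes carry a lexeme (or concept) and a feature set, and whose arcs carry relation labels (see Section 3). The minimal sketch below is only an illustration of that data structure; the class and method names are hypothetical and do not correspond to the framework's actual Java API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of an unordered dependency-tree node (hypothetical classes,
 * not the framework's actual API). A node carries a lexeme or concept label,
 * a feature map, and labelled arcs to its dependents; no linear order is
 * stored, since ordering is only introduced at surface realization.
 */
final class DepNode {
    final String label;                                    // e.g. "SELL", "LOW", "#TEMPERATURE"
    final Map<String, String> features = new HashMap<>();  // e.g. class=verb, number=sg
    final List<Arc> dependents = new ArrayList<>();

    DepNode(String label) { this.label = label; }

    DepNode add(String relation, DepNode child) {          // relation labels such as "I", "II", "ATTR"
        dependents.add(new Arc(relation, child));
        return this;
    }

    record Arc(String relation, DepNode target) {}
}

class DepNodeDemo {
    public static void main(String[] args) {
        // Schematic DSyntS for "Low -5 to high 20" (after the temperature example in Figures 2 and 3).
        DepNode low = new DepNode("LOW")
                .add("ATTR", new DepNode("-5"))
                .add("ATTR", new DepNode("TO")
                        .add("II", new DepNode("HIGH")
                                .add("ATTR", new DepNode("20"))));
        System.out.println(low.label + " governs " + low.dependents.size() + " dependents");
    }
}
```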
What is important is that each mapping specifies a transformation between structures at different levels of representation but that are represented in one and the same representation formalism (DSyntS and SSyntS in this case).", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 130, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "As we will see below, grammar rules are also expressed in a similar way. At the conceptual level, the conceptual lexicon associates lexical-structural mapping with concepts in a similar way. Figure 5 illustrates the mapping at the deep-syntactic level associated with the concept #TEMPERATURE. Except for the slight differences in the labelling, this type of specification is similar to the one used on the lexical level. The first mapping rule corresponds to one of the lexico-structural transformations used to convert the interlingual ConcS of Figure 3 to the corresponding DSyntS.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 199, "text": "Figure 5", "ref_id": null }, { "start": 547, "end": 555, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "ZONCEPT: #TEMPERATURE 5EXICAL:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "[ L~-RULE: #TEMPERATURE ( #minimum SX #maxim~ $Y <--> LOW ( ATTR $X ATTR TO ( II HIGH ( ATTR SY ) ) ) LEX-RULE: #TEMPERATURE ( #minim~ SX <--> LOW ( ATTR $X ) LEX-RULE: #TEMPE~TURE ( #maximum $X <--> HIGH ( ATTR SX ) ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "Figure 5: Specification of Concept #TEMPERATURE Note that since each lexicon entry can have more than one lexical-structural mapping rule, the list of these rules represents a small grammar specific to this lexeme or concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "Realization grammar rules of the main grammar include generic mapping rules (which are not lexeme-specific) such as the DSyntS-rule illustrated in Figure 6 , for inserting a determiner. The lexicon formalism has also been extended to implement lexeme-specific lexico-structural transfer rules. Figure 7 shows the lexicostructural transfer of the English verb lexeme MOVE to French implemented for a military and weather domain (Nasr et al., 1998 ):", "cite_spans": [ { "start": 427, "end": 445, "text": "(Nasr et al., 1998", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 294, "end": 302, "text": "Figure 7", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "Cloud will move into the western regions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "Des nuages envahiront les rdgions ouest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "They moved the assets forward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "-.9 lls ont amen~ les ressources vers l 'avant. 
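To make the control flow concrete, the following sketch shows how the engine might look up a lexeme-specific transfer entry (such as the one for MOVE) and try its rules in order. All type and method names are hypothetical, and the conditions that select, for instance, envahir rather than amener belong to the declarative lexicon entry, not to Java code.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

/** Schematic dependency-tree node for the transfer sketch (hypothetical type). */
record Node(String lexeme, Map<String, String> features, List<Child> children) {
    record Child(String relation, Node node) {}
}

/** One lexeme-specific transfer rule: a structural condition plus a rewrite. */
interface TransferRule {
    boolean matches(Node source);   // e.g. MOVE with a directional "into"-type complement
    Node rewrite(Node source);      // target-language subtree, e.g. headed by ENVAHIR or AMENER
}

/**
 * Schematic transfer lexicon (hypothetical API): each source lexeme maps to an
 * ordered list of rules, assumed ordered from most specific to least specific.
 */
final class TransferLexicon {
    private final Map<String, List<TransferRule>> entries;

    TransferLexicon(Map<String, List<TransferRule>> entries) { this.entries = entries; }

    /** Apply the first matching lexeme-specific rule, if any. */
    Optional<Node> transfer(Node source) {
        for (TransferRule rule : entries.getOrDefault(source.lexeme(), List.of())) {
            if (rule.matches(source)) {
                return Optional.of(rule.rewrite(source));
            }
        }
        return Optional.empty();    // no entry matched: fall back to the general transfer grammar
    }
}
```

In the framework itself the rule bodies are compiled from the declarative ASCII specifications; the sketch only illustrates the lookup-and-apply step that precedes the application of the general transfer grammar.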
More general lexico-structural rules for transfer can also be implemented using our grammar rule formalism. Figure 8 gives an English-French transfer rule applied to a weather domain for the transfer of a verb modified by the adverb ALMOST: More details on how the structural divergences described in (Dorr, 1994) can be accounted for using our formalism can be found in .", "cite_spans": [ { "start": 349, "end": 361, "text": "(Dorr, 1994)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 156, "end": 164, "text": "Figure 8", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "The Framework's Linguistic Resources", "sec_num": "4" }, { "text": "Before being processed, the rules are first compiled and indexed for optimisation. Each module applies the following processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Rule Processing", "sec_num": "5" }, { "text": "The rules are assumed to be ordered from most specific to least specific. The application of the rules to the structures is top-down in a recursive way from the f'n-st rule to the last. For the main grammar, before applying a grammar rule to a given node, dictionary lookup is carried out in order to first apply the lexeme-or conceptspecific rules associated with this node. These are also assumed to be ordered from the most specific to the least specific.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Rule Processing", "sec_num": "5" }, { "text": "If a lexico-structural transformation involves switching a governor node with one of its dependents in the tree, the process is reapplied with the new node governor. When no more rules can be applied, the same process is applied to each dependent of the current governor. When all nodes have been processed, the processing is completed, 6 Using the Framework to build Applications Figure 9 shows how different instantiations of the tree transduction module can be combined to build NLP applications. The diagram does not represent a particular system, but rather shows the kind of transformations that have been implemented using the framework, and how they interact. Each arrow represents one type of processing implemented by an instantiation of the tree transduction module. Each triangle represents a different level of representation. For example, in Figure 9 , starting with the \"Input Sentence LI\" and passing through Parsing, Conversion, Transfer, DSyntS Realization and SSyntS Realization to \"Generated Sentence L2\" we obtain an Ll-to-L2 MT system. Starting with \"Sentence Planning\" and passing through DSyntS Realization, and SSyntS Realization (including linearization and inflection) to \"Generated Sentence LI\", we obtain a monolingual NLG system for L1.", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 9", "ref_id": "FIGREF9" }, { "start": 856, "end": 864, "text": "Figure 9", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "The Rule Processing", "sec_num": "5" }, { "text": "So far the framework has been used successfully for building a wide variety of applications in different domains and for different languages:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scope of the Framework ~Conversion bl Parsed", "sec_num": null }, { "text": "level for the domains of meteorology (MeteoCogent; Kittredge and Lavoie, 1998) and object modeling (ModelExplainer; . \u2022 Generation of English text from conceptual interlingua for the meteorology domain (MeteoCogent). 
(The design of the interlingua can also support the generation of French but this functionality has not yet been implemented.)", "cite_spans": [ { "start": 51, "end": 78, "text": "Kittredge and Lavoie, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "NLG: \u2022 Realization of English DSyntSs via SSyntS", "sec_num": null }, { "text": "MT:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLG: \u2022 Realization of English DSyntSs via SSyntS", "sec_num": null }, { "text": "\u2022 Transfer on the DSyntS level and realization via SSyntS level for English--French, English--Arabic, English---Korean and Korean--English. Translation in the meteorology and battlefield domains (Nasr et al., 1998) .", "cite_spans": [ { "start": 195, "end": 214, "text": "(Nasr et al., 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "NLG: \u2022 Realization of English DSyntSs via SSyntS", "sec_num": null }, { "text": "\u2022 Conversion of the output structures from off-the-shelf English, French and Korean parsers to DSyntS level before their processing by the other components in the framework (Palmer et al., 1998) .", "cite_spans": [ { "start": 173, "end": 194, "text": "(Palmer et al., 1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "NLG: \u2022 Realization of English DSyntSs via SSyntS", "sec_num": null }, { "text": "Empirical results obtained from the applications listed in Section 6 have shown that the approach used in the framework is flexible enough and easily portable to new domains, new languages, and new applications. Moreover, the time spent for development was relatively short compared to that formerly required in developing similar types of applications. Finally, as intended, the limited computational power of the transduction module, as well as careful implementation, including the compilation of declarative linguistic knowledge to Java, have ensured efficient run-time behavior. For example, in the MT domain we did not originally plan for a separate conversion step from the parser output to DSyntS. However, it quickly became apparent that there was a considerable gap between the output of the parsers we were using and the DSyntS representation that was required, and furthermore, that we could use the tree transduction module to quickly bridge this gap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "Nevertheless, our tree transduction-based approach has some important limitations. In particular, the framework requires the developer of the transformation rules to maintain them and specify the order in which the rules must be applied. For a small or a stable grammar, this does not pose a problem. However, for large or rapidly changing grammar (such as a transfer grammar in MT that may need to be adjusted when switching from one parser to another), the burden of the developer's task may be quite heavy. In practice, a considerable amount of time can be spent in testing a grammar after its revision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "Another major problem is related to the maintenance of both the grammar and the lexicon. On several occasions during the development of these resources, the developer in charge of adding lexical and grammatical data must make some decisions that are domain specific. 
For example, in MT, writing transfer rules for terms that can have several meanings or uses, they may simplify the problem by choosing a solution based on the context found in the current corpus, which is a perfectly natural strategy. However, later, when porting the transfer resources to other domains, the chosen strategy may need to be revised because the context has changed, and other meanings or uses are found in the new corpora. Because the current approach is based on handcrafted rules, maintenance problems of this sort cannot be avoided when porting the resources to new domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "An approach such as the one described in (Nasr et al., 1998; seems to be solving a part of the problem when it uses corpus analysis techniques for automatically creating a first draft of the lexical transfer dictionary using statistical methods. However, the remaining work is still based on handcrafting because the developer must refine the rules manually. The current framework offers no support for merging handcrafted rules with new lexical rules obtained statistically while preserving the valid handcrafted changes and deleting the invalid ones. In general, a better integration of linguistically based and statistical methods during all the development phases is greatly needed.", "cite_spans": [ { "start": 41, "end": 60, "text": "(Nasr et al., 1998;", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "The framework represents a generalization of several predecessor NLG systems based on Meaning-Text Theory: FoG (Kittredge and Polgu~re, 1991) , LFS (Iordanskaja et al., 1992) , and JOYCE (Rambow and Korelsky, 1992) . The framework was originally developed for the realization of deep-syntactic structures in NLG .", "cite_spans": [ { "start": 111, "end": 141, "text": "(Kittredge and Polgu~re, 1991)", "ref_id": "BIBREF5" }, { "start": 148, "end": 174, "text": "(Iordanskaja et al., 1992)", "ref_id": "BIBREF3" }, { "start": 187, "end": 214, "text": "(Rambow and Korelsky, 1992)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "It was later extended for generation of deep-syntactic structures from conceptual interlingua (Kittredge and Lavoie, 1998) . Finally, it was applied to MT for transfer between deep-syntactic structures of different languages (Palmer et al., 1998) . The current framework encompasses the full spectrum of such transformations, i.e. from the processing of conceptual structures to the processing of deep-syntactic structures, either for NLG or MT.", "cite_spans": [ { "start": 94, "end": 122, "text": "(Kittredge and Lavoie, 1998)", "ref_id": "BIBREF4" }, { "start": 225, "end": 246, "text": "(Palmer et al., 1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "Compared to its predecessors (Fog, LFS, JOYCE), our approach has obvious advantages in uniformity, declarativity and portability. The framework has been used in a wider variety of domains, for more languages, and for more applications (NLG as well as MT). 
The framework uses the same engine for all the transformations at all levels because all the syntactic and conceptual structures are represented as dependency tree structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "In contrast, the predecessor systems were not designed to be rapidly portable. These systems used programming languages or scripts for the implementation of the transformation rules, and used different types of processing at different levels of representation. For instance, in LFS conceptual structures were represented as graphs, whereas syntactic structures were represented as trees which required different types of processing at these two levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "Our approach also has some disadvantages compared with the systems mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "Our lexico-structural transformations are far less powerful than those expressible using an arbitrary programming language. In practice, the formalism that we are using for expressing the transformations is inadequate for long-range phenomena (inter-sentential or intra-sentential), including syntactic phenomena such as longdistance wh-movement and discourse phenomena such as anaphora and ellipsis. The formalism could be extended to handle intrasentential syntactic effects, but inter-sentential discourse phenomena probably require procedural rules in order to access lexemes in other sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "In fact, LFS and JOYCE include a specific module for elliptical structure processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "Similarly, the limited power of the tree transformation rule formalism distinguishes the framework from other NLP frameworks based on more general processing paradigms such as unification of FUF/SURGE in the generation domain (Elhadad and Robin, 1992) .", "cite_spans": [ { "start": 226, "end": 251, "text": "(Elhadad and Robin, 1992)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Lessons Learned Using the Framework", "sec_num": "7" }, { "text": "The framework is currently being improved in order to use XML-based specifications for representing the dependency structures and the transformation rules in order to offer a more standard development environment and to facilitate the framework extension and maintenance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Status", "sec_num": "9" }, { "text": "History of the Framework and Comparison with Other Systems", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "A first implementation of the framework (C++ processor and ASCII formalism for expressing the lexico-structural transformation rules) applied to NLG was developed under SBIR F30602-92-C-0015 awarded by USAF Rome Laboratory.The extensions to MT were developed under SBIR DAAL01-97-C-0016 awarded by the Army Research Laboratory. 
The Java implementation and general improvements of the framework were developed under SBIR DAAD17-99-C-0008 awarded by the Army Research Laboratory. We are thankful to Ted Caldwell, Daryl McCullough, Alexis Nasr and Mike White for their comments and criticism on the work reported in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Machine translation divergences: A formal description and proposed solution", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "", "pages": "597--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B. J. (1994) Machine translation divergences: A formal description and proposed solution. In Computational Linguistics, vol. 20, no. 4, pp. 597- 635.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Controlling Content Realization with Functional Unification Grammars", "authors": [ { "first": "M", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "J", "middle": [], "last": "Robin", "suffix": "" }, { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 1992, "venue": "Aspects of Automated Natural Language Generation", "volume": "", "issue": "", "pages": "89--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elhadad, M. and Robin, J. (1992) Controlling Content Realization with Functional Unification Grammars. In Aspects of Automated Natural Language Generation, Dale, R., Hovy, E., Rosner, D. and Stock, O. Eds., Springer Verlag, pp. 89- 104.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Introduction to Machine Translation", "authors": [ { "first": "W", "middle": [ "J" ], "last": "Hutchins", "suffix": "" }, { "first": "H", "middle": [ "L" ], "last": "Somers", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hutchins, W. J. and Somers, H. L. (1997) An Introduction to Machine Translation. Academic Press, second edition.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generation of Extended Bilingual Statistical Reports", "authors": [ { "first": "L", "middle": [], "last": "Iordanskaja", "suffix": "" }, { "first": "M", "middle": [], "last": "Kim", "suffix": "" }, { "first": "R", "middle": [], "last": "Kittredge", "suffix": "" }, { "first": "B", "middle": [], "last": "Lavoie", "suffix": "" }, { "first": "A", "middle": [], "last": "Polgu6re", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1019--1023", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iordanskaja, L., Kim, M., Kittredge, R., Lavoie, B. and Polgu6re, A. (1992) Generation of Extended Bilingual Statistical Reports. In Proceedings of the 15th International Conference on Computational Linguistics, Nantes, France, pp. 
1019-1023.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MeteoCogent: A Knowledge-Based Tool For Generating Weather Forecast Texts", "authors": [ { "first": "R", "middle": [], "last": "Kittredge", "suffix": "" }, { "first": "B", "middle": [], "last": "Lavoie", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the American Meteorological Society AI Conference", "volume": "", "issue": "", "pages": "80--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kittredge, R. and Lavoie, B. (1998) MeteoCogent: A Knowledge-Based Tool For Generating Weather Forecast Texts. In Proceedings of the American Meteorological Society AI Conference (AMS-98), Phoenix, Arizona, pp. 80--83.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dependency Grammars for Bilingual Text Generation: Inside FoG's Stratificational Models", "authors": [ { "first": "R", "middle": [], "last": "Kittredge", "suffix": "" }, { "first": "A", "middle": [], "last": "Polgu~re", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the International Conference on Current Issues in Computational Linguistics", "volume": "", "issue": "", "pages": "318--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kittredge, R. and Polgu~re, A. (1991) Dependency Grammars for Bilingual Text Generation: Inside FoG's Stratificational Models. In Proceedings of the International Conference on Current Issues in Computational Linguistics, Penang, Malaysia, pp. 318-330.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Interlingua for Bilingual Statistical Reports", "authors": [ { "first": "B", "middle": [], "last": "Lavoie", "suffix": "" } ], "year": 1995, "venue": "Notes of IJCAI-95 Workshop on Multilingual Text Generation, Montr6al, Canada", "volume": "", "issue": "", "pages": "84--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lavoie, B. (1995) Interlingua for Bilingual Statistical Reports. In Notes of IJCAI-95 Workshop on Multilingual Text Generation, Montr6al, Canada, pp. 84---94.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Fast and Portable Realizer for Text Generation Systems", "authors": [ { "first": "B", "middle": [], "last": "Lavoie", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "265--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lavoie, B. and Rambow, O. (1997) A Fast and Portable Realizer for Text Generation Systems. In Proceedings of the Fifth Conference on Applied Natural Language Processing, Washington, DC., pp. 265-268.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Customizable Descriptions of Object-Oriented Models", "authors": [ { "first": "B", "middle": [], "last": "Lavoie", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "E", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "253--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lavoie, B., Rambow, O. and Reiter, E. (1997) Customizable Descriptions of Object-Oriented Models. In Proceedings of the Fifth Conference on Applied Natural Language Processing, Washington, DC., pp. 
253-256.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Dependency Syntax", "authors": [ { "first": "I", "middle": [], "last": "Mel'cuk", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mel'cuk, I. (1988) Dependency Syntax. State University of New York Press, Albany, NY.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Enriching lexical transfer with crosslinguistic semantic features", "authors": [ { "first": "A", "middle": [], "last": "Nasr", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "J", "middle": [], "last": "Rosenzweig", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Interlingua Workshop at the MT Summit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nasr, A., Rambow, O., Palmer, M. and Rosenzweig, J. (1998) Enriching lexical transfer with cross- linguistic semantic features. In Proceedings of the Interlingua Workshop at the MT Summit, San Diego, California.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Rapid Prototyping of Domain-Specific Machine Translation Systems", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "A", "middle": [], "last": "Nasr", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Third Conference on Machine Translation in the Americas (AMTA-98)", "volume": "", "issue": "", "pages": "95--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, M., Rambow, O. and Nasr, A. (1998) Rapid Prototyping of Domain-Specific Machine Translation Systems. In Proceedings of the Third Conference on Machine Translation in the Americas (AMTA-98), PA, USA, pp. 95-102.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Everything has not been said about interlinguae: the case of multi-lingual text generation system", "authors": [ { "first": "A", "middle": [], "last": "Polgu6re", "suffix": "" } ], "year": 1991, "venue": "Proc. of Natural Language Processing Pacific Rim Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Polgu6re, A. (1991) Everything has not been said about interlinguae: the case of multi-lingual text generation system. In Proc. of Natural Language Processing Pacific Rim Symposium, Singapore.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Applied Text Generation", "authors": [ { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "T", "middle": [], "last": "Korelsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 6th International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "40--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rambow, O. and Korelsky, T. (1992) Applied Text Generation. In Proceedings of the 6th International Workshop on Natural Language Generation, Trento, Italy, pp. 
40--47.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automated translation at Grenoble University", "authors": [ { "first": "B", "middle": [], "last": "Vauquois", "suffix": "" }, { "first": "C", "middle": [], "last": "Boitet", "suffix": "" } ], "year": 1985, "venue": "Computational Linguistics", "volume": "11", "issue": "", "pages": "28--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vauquois, B. and Boitet C. (1985) Automated translation at Grenoble University. In Computational Linguistics, Vol. 11, pp. 28-36.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "All linguistic resources are represented in a declarative manner. An instantiation of the tree transduction module consists of a specification of the linguistic resources." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Figure 1: Design of the Tree Transduction Module 3 The Framework's Representations" }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "\u2022 Deep-syntactic structures (DSyntSs); \u2022 Surface syntactic structures (SSyntSs); \u2022 Conceptual structures (ConcSs); \u2022 Parsed syntactic structures (PSyntSs)." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "DSyntS (Graphical and ASCII Notation)" }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "ConcS Interlingua and English DSyntS" }, "FIGREF5": { "uris": null, "type_str": "figure", "num": null, "text": "Specification of Lexeme SELL" }, "FIGREF6": { "uris": null, "type_str": "figure", "num": null, "text": "Deep-Syntactic Rule for Determiner Insertion" }, "FIGREF7": { "uris": null, "type_str": "figure", "num": null, "text": "Lexico-Structural Transfer of English Lexerne MOVE to French" }, "FIGREF8": { "uris": null, "type_str": "figure", "num": null, "text": "English to French Lexico-Structural Transfer Rule with Verb Modifier ALMOST" }, "FIGREF9": { "uris": null, "type_str": "figure", "num": null, "text": "Scope of the Framework's Transformations" }, "TABREF0": { "content": "
\u2022 Lexicon: This consists of the available lexemes or concepts, depending on whether the module works at the syntactic or conceptual level. Each lexeme and concept is defined with its features, and may contain specific lexico-structural rules: transfer rules for MT, or mapping rules to the next level of representation for surface realization of DSyntS or lexicalization of ConcS.
\u2022 Main Grammar: This consists of the lexico-structural mapping rules that apply at this level and which are not lexeme- or concept-specific (e.g. DSynt-rules for the DSynt module, Transfer-rules for the Transfer module, etc.).
\u2022 Preprocessing grammar: This consists of the lexico-structural mapping rules for transforming the input structures in order to make them compliant with the main grammar, if this is necessary. Such rules are used to integrate new modules when discrepancies in the formalism need to be fixed. This grammar can also be used for adding default features (e.g. setting the default number of nouns to singular) or for applying default transformations (e.g. replacing non-meaning-bearing lexemes with features).
\u2022 Postprocessing grammar:
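An instantiation of the tree transduction module pairs the generic engine with one such set of declarative resources, and applications are assembled by pipelining instantiations (cf. Figure 1 and Section 6). The sketch below is purely illustrative: the types and method names are hypothetical, and in the framework the resources are declarative specifications compiled for the engine rather than hand-written Java objects.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Placeholder types standing in for the declarative resources and the trees
// they operate on (all hypothetical; in the framework these are specifications
// in the declarative ASCII formalism, compiled for the engine).
interface Tree {}
interface FeatureDataBase {}
interface Lexicon {}
interface Grammar { Tree transduce(Tree tree, Lexicon lexicon, FeatureDataBase features); }

/**
 * Schematic instantiation of the tree transduction module: the generic engine
 * plus one set of linguistic resources (feature data base, lexicon, and the
 * pre-/main/post-processing grammars).
 */
final class TransductionModule implements UnaryOperator<Tree> {
    private final FeatureDataBase features;
    private final Lexicon lexicon;
    private final Grammar preprocessing;
    private final Grammar main;
    private final Grammar postprocessing;

    TransductionModule(FeatureDataBase features, Lexicon lexicon,
                       Grammar preprocessing, Grammar main, Grammar postprocessing) {
        this.features = features;
        this.lexicon = lexicon;
        this.preprocessing = preprocessing;
        this.main = main;
        this.postprocessing = postprocessing;
    }

    @Override
    public Tree apply(Tree input) {
        Tree t = preprocessing.transduce(input, lexicon, features);
        t = main.transduce(t, lexicon, features);   // lexeme/concept-specific rules first, then grammar rules
        return postprocessing.transduce(t, lexicon, features);
    }

    /** E.g. an L1-to-L2 MT pipeline: conversion, transfer, DSyntS and SSyntS realization. */
    static Tree runPipeline(Tree input, List<TransductionModule> modules) {
        Tree t = input;
        for (TransductionModule m : modules) {
            t = m.apply(t);
        }
        return t;
    }
}
```

Because every module consumes and produces trees expressed in the same formalism, modules can be added, removed, or re-ordered without touching the engine.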
", "type_str": "table", "num": null, "text": "Feature Data-Base: This consists of the feature system defining available features and their possible values in the module.", "html": null } } } }