{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:08.292384Z" }, "title": "Generating Varied Training Corpora in Runyankore Using a Combined Semantic and Syntactic, Pattern-Grammar-based Approach", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research Africa", "location": { "addrLine": "45 Juta Street", "settlement": "Braamfontein Johannesburg", "country": "South Africa" } }, "email": "joan.byamugisha@ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Machine learning algorithms have been applied to achieve high levels of accuracy in tasks associated with the processing of natural language. However, these algorithms require large amounts of training data in order to perform efficiently. Since most Bantu languages lack the required training corpora because they are computationally under-resourced, we investigated how to generate a large varied training corpus in Runyankore, a Bantu language indigenous to Uganda. We found the use of a combined semantic and syntactic, pattern and grammar-based approach to be applicable to this purpose, and used it to generate one million sentences, both labelled and unlabelled, which can be applied as training data for machine learning algorithms. The generated text was evaluated in two ways: (1) assessing the semantics encoded in word embeddings obtained from the generated text, which showed correct word similarity; and (2) applying the labelled data to tasks such as sentiment analysis, which achieved satisfactory levels of accuracy.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Machine learning algorithms have been applied to achieve high levels of accuracy in tasks associated with the processing of natural language. However, these algorithms require large amounts of training data in order to perform efficiently. Since most Bantu languages lack the required training corpora because they are computationally under-resourced, we investigated how to generate a large varied training corpus in Runyankore, a Bantu language indigenous to Uganda. We found the use of a combined semantic and syntactic, pattern and grammar-based approach to be applicable to this purpose, and used it to generate one million sentences, both labelled and unlabelled, which can be applied as training data for machine learning algorithms. The generated text was evaluated in two ways: (1) assessing the semantics encoded in word embeddings obtained from the generated text, which showed correct word similarity; and (2) applying the labelled data to tasks such as sentiment analysis, which achieved satisfactory levels of accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The application of machine learning algorithms to natural language processing, generation, and understanding has led to the development of highly accurate systems for information extraction, text classification, summarization, question answering, machine translation, image and video captioning (Otter et al., 2018) , and language learning (assessment, support, and analytics) (Vajjala, 2018) . However, large training sets are critical to achieving high levels of accuracy, and, for some applications, creating these training sets is the most time-consuming and expensive part of applying machine learning algorithms (Ratner et al., 2016) . 
This has resulted in the absence, to a larger extent, of machine learning applications for the very under-resourced Bantu languages. A possible solution to this problem is to generate large datasets that can then be used as training data.", "cite_spans": [ { "start": 295, "end": 315, "text": "(Otter et al., 2018)", "ref_id": "BIBREF27" }, { "start": 377, "end": 392, "text": "(Vajjala, 2018)", "ref_id": "BIBREF35" }, { "start": 618, "end": 639, "text": "(Ratner et al., 2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Artificially creating more training data has been applied to speech (Hannun et al., 2014) , image (Taylor and Nitschke, 2017) , and text (D'hondt et al., 2017; Ratner et al., 2016) . Our interest lies in textual data, specifically, a method for how to generate a large training corpus in Runyankore, a Bantu language indigenous to Uganda. We posed the following questions:", "cite_spans": [ { "start": 68, "end": 89, "text": "(Hannun et al., 2014)", "ref_id": "BIBREF14" }, { "start": 98, "end": 125, "text": "(Taylor and Nitschke, 2017)", "ref_id": "BIBREF33" }, { "start": 137, "end": 159, "text": "(D'hondt et al., 2017;", "ref_id": "BIBREF11" }, { "start": 160, "end": 180, "text": "Ratner et al., 2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. What are the existing approaches for generating large training textual corpora?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Which one(s) can be applied to generate a varied, semantically coherent training corpus in Runyankore?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our aim was to generate very large corpora, both labelled and unlabelled, which could be used for sentiment and morphological analysis, and to assess word similarity, respectively. We found the use of a combined semantic and syntactic, pattern and grammar-based approach sufficient to generate one million Runyankore sentences , both labelled and unlabelled, from a dictionary of terms categorized into their appropriate parts of speech. We used generation patterns to handle the phrasal structure that comprised: adjectives, adverbs, conjunctions, prepositions, nouns, and verbs. A Context-Free Grammar (CFG) was used for verb conjugation in the simple present, present continuous, near future, remote past, near past, participial present continuous, and participial near future tenses; both primary and secondary negation; as well as the applicative, causative, and passive extensions. 
The evaluation of the generated text showed that it was correctly semantically related, and applicable to supervised machine learning tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is arranged as follows: Section 2 provides some basics on Runyankore and its complex grammatical structure; Section 3 discusses the existing approaches for generating large training corpora and their applicability to Runyankore; Section 4 details how we generated a large Runyankore corpus and evaluated its level of variation, applicability, and word similarity; and we discuss the implications of this work in Section 5 and conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Runyankore is a Bantu language spoken in the south-western part of Uganda (Asiimwe, 2014; Tayebwa, 2014; Turamyomwe, 2011) . It has an agglutinating morphology, where words are formed by adding affixes to their bases, and each affix carries meaning such as tense and aspect (Nurse and Philippson, 2003; Turamyomwe, 2011) as shown in the example below.", "cite_spans": [ { "start": 74, "end": 89, "text": "(Asiimwe, 2014;", "ref_id": "BIBREF0" }, { "start": 90, "end": 104, "text": "Tayebwa, 2014;", "ref_id": "BIBREF31" }, { "start": 105, "end": 122, "text": "Turamyomwe, 2011)", "ref_id": "BIBREF34" }, { "start": 274, "end": 302, "text": "(Nurse and Philippson, 2003;", "ref_id": "BIBREF26" }, { "start": 303, "end": 320, "text": "Turamyomwe, 2011)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "Runyankore: Ninkimumanya.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "Morphemes: ni-n-ki-mu-many-a English: I still know him/her.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "In the above example, the morpheme ni is the continuous marker; n is the pronoun 'I'; ki is the persistive aspect that translates to 'still'; mu is the third-person pronoun for 'him/her'; many is the verb-root for 'know'; and a is the indicative final vowel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "Like all Bantu languages, Runyankore assigns all nouns to a class, and it has 20 noun classes (Excluding class 19) (Asiimwe, 2014) . The simple noun comprises a prefix and a stem; for example, omuntu 'person' comprises the class prefix o-mu-(where o is the initial vowel or augment), and the stem -ntu. Additionally, the noun class (NC) is at the heart of an extensive system of concordial agreement that governs agreement in verbs, adjectives, possessives, subject, object, etc. (Katamba, 2003; Maho, 1999; Tayebwa, 2014) . 
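To make the concordial agreement system concrete, the sketch below shows one possible way to store and query the concords summarized in Table 1 (introduced next); the dictionary rows are taken from Table 1, but the data structure and function names are illustrative assumptions rather than anything from the paper.

```python
# Excerpt of Table 1 as a lookup keyed by noun class (NC); SC = subject
# concord, PC = possessive concord, AC = adjective concord. The structure
# and function name are illustrative assumptions, not the paper's code.
CONCORDS = {
    1: {"prefix": "o-mu-",     "SC": "-a-",  "PC": "o-wa",  "AC": "o-mu-"},
    2: {"prefix": "a-ba-",     "SC": "-ba-", "PC": "a-ba",  "AC": "a-ba-"},
    3: {"prefix": "o-mu-",     "SC": "-gu-", "PC": "o-gwa", "AC": "o-mu-"},
    9: {"prefix": "e-n-/e-m-", "SC": "-e-",  "PC": "e-ya",  "AC": "e-n-"},
}

def subject_concord(noun_class: int) -> str:
    """Return the subject concord (SC) for a given noun class."""
    return CONCORDS[noun_class]["SC"]

# omuntu 'person' (NC 1) and omuti 'tree' (NC 3) share the prefix o-mu-
# but take different subject concords, which is why the NC must be known.
print(subject_concord(1), subject_concord(3))  # -> -a- -gu-
```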
Table 1 shows the noun class (NC) with its number and class prefix, as well as the subject concord (SC), possessive concord (PC), and adjective concord (AC).", "cite_spans": [ { "start": 115, "end": 130, "text": "(Asiimwe, 2014)", "ref_id": "BIBREF0" }, { "start": 480, "end": 495, "text": "(Katamba, 2003;", "ref_id": "BIBREF17" }, { "start": 496, "end": 507, "text": "Maho, 1999;", "ref_id": "BIBREF23" }, { "start": 508, "end": 522, "text": "Tayebwa, 2014)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 525, "end": 532, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "The default phrasal structure in Runyankore, and across Bantu languages, is Subject-Verb-Object (SVO), and the noun precedes its modifiers within a noun phrase (Nurse and Philippson, 2003) . Runyankore's verbal morphology comprises fourteen tenses, six aspects, and nine verbal extensions, and", "cite_spans": [ { "start": 160, "end": 188, "text": "(Nurse and Philippson, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "NC SC PC AC 1. o-mu- -a- o-wa o-mu- 2. a-ba- -ba- a-ba a-ba- 3. o-mu- -gu- o-gwa o-mu- 4. e-mi-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "-gi-e-ya e-mi-5. ei-/e-ri--ri-e-rya e-ri-6. a-ma--ga-a-ga a-ma-7. e-ki--ki-e-kya e-ki-8. e-bi--bi-e-bya e-bi-9. e-n-/e-m--e-e-ya e-n-10. e-n-/em--zi-e-za e-n- the general verbal structure is as below (Turamyomwe, 2011): Table 2 from Turamyomwe (2011) shows the different 'slots' in Runyankore's verbal morphology, as well as the morphemes which occupy these slots.", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 324, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "11. o-ru- -ru- o-rwa o-ru- 12. a-ka- -ka- a-ka -a-ka- 13. o-tu- -tu- o-twa o-tu- 14. o-bu- -bu- o-bwa o-bu - 15. o-ku- -ku- o-kwa o-ku- 16. a-ha- -ha- a-ha a-ha- 17. o-ku- -ha- - a-ha- 18. o-mu- -ha- - a-ha- 20. o-gu- -gu- o-gwa o-gu- 21. a-ga- -ga- a-ga a-ga-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "The 'PreInitial' contains the primary negation or continuous marker; the 'initial,', the NC-based subject concord; 'the 'PostInitial', the secondary negative; the 'Formative', all tenses except the near past tense; the 'Limitative', the persistive aspect; the 'Infix', the NC-based object concord; the 'Extensions', that specify valency-changing categories and include the causative, applicative, stative, reciprocal, reversive, repetitive, intensive, instrumental, and passive; and the 'Final' contains morphemes associated with mood (indicative or subjunctive), the near past tense, locatives, emphatic, or declarative (Turamyomwe, 2011). In this section, we only discuss the approaches used to produce large general-purpose corpora that are used in the applications stated in Section 1. We therefore do not include methods for taskoriented training data generation such as Gardent et al. (2017) ; Lebret et al. (2016) ; Wen et al. (2015). We instead focus on four approaches: thesaurus inflation, data counterfeiting, weak supervision, and a combined semantic and syntactic, rule-based and statistical approach. 
Thesaurus inflation involves probabilistically replacing terms with their synonyms (Zhang and Le-Cun, 2015) . Data counterfeiting is the process of delexicalizing the annotated values from existing training data, and then randomly replacing them with similar related values (Wen et al., 2016) . In weak supervision, training documents are deliberately noisily annotated to produce weighted low quality training data, and the weights are used in a loss function to enable noise-aware training Ratner et al., 2016 . Weak supervision focuses on generating labelled training data, and its use was found to result in training on larger and more diverse corpora during OCR postcorrection (D'hondt et al., 2017) . The combined semantic and syntactic, rule-based and statistical approach has been applied by ForgeAI and comprises: (1) a grammatical model derived from a Probabilistic Context-Free Grammar (PCFG) and refined using human annotations, which learns the grammar that characterizes a particular event; (2) semantic planning, built with a probabilistic graphical model, which decides the semantically relevant roles and tokens to include in an expression; and (3) a surface realizer, which converts a semantic plan into a grammatically correct text (Neely, 2018) .", "cite_spans": [ { "start": 876, "end": 897, "text": "Gardent et al. (2017)", "ref_id": "BIBREF13" }, { "start": 900, "end": 920, "text": "Lebret et al. (2016)", "ref_id": "BIBREF22" }, { "start": 1198, "end": 1222, "text": "(Zhang and Le-Cun, 2015)", "ref_id": null }, { "start": 1389, "end": 1407, "text": "(Wen et al., 2016)", "ref_id": "BIBREF36" }, { "start": 1607, "end": 1626, "text": "Ratner et al., 2016", "ref_id": "BIBREF29" }, { "start": 1797, "end": 1819, "text": "(D'hondt et al., 2017)", "ref_id": null }, { "start": 2366, "end": 2379, "text": "(Neely, 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "Thesaurus inflation, data counterfeiting, and weak supervision all rely on working on existing corpora (labelled data in the case of weak supervision), which Runyankore does not possess, thus creating a 'chicken and egg' problem. Also, thesaurus inflation and data counterfeiting introduce no new semantic variation in the generated text, and this is a key requirement for our preferred training corpus. The combined semantic and syntactic, rule-based and statistical approach is also limited because it requires statistical methods (PCFGs and probabilistic graphical models) which are obtained from large corpora, again, which Runyankore does not possess.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "Despite this, and unlike the first three approaches, we found that the drawbacks of the combined semantic and syntactic, rule-based and statistical approach can be overcome, with some modifications, in order to generate a large corpus in Runyankore. For example, the PCFGs can be substituted with a Context-Free-Grammar-based generator that has already been shown to produce simple verbs in Runyankore (Byamugisha et al., 2016b) and more complex verbs in isiZulu 1 . The semantic planning can be built using generation patterns that have been used in surface realizers for Runyankore (Byamugisha et al., 2016a (Byamugisha et al., , 2017b and isiZulu (Keet and Khumalo, 2014; . 
However, the use of patterns requires a means of providing enough variation in the patterns so as to generate a varied training corpus. We therefore investigated the use of a combined semantic and syntactic, pattern-grammarbased approach to generate a varied training corpus in Runyankore.", "cite_spans": [ { "start": 402, "end": 428, "text": "(Byamugisha et al., 2016b)", "ref_id": "BIBREF6" }, { "start": 584, "end": 609, "text": "(Byamugisha et al., 2016a", "ref_id": "BIBREF5" }, { "start": 610, "end": 637, "text": "(Byamugisha et al., , 2017b", "ref_id": "BIBREF8" }, { "start": 650, "end": 674, "text": "(Keet and Khumalo, 2014;", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Brief Background on Runyankore", "sec_num": "2" }, { "text": "From previous work on generating text in Runyankore, it has been shown that noun semantics play a crucial role in noun pluralization (Byamugisha et al., 2016c) , verb conjugation (Byamugisha et al., 2016b) , and the generation of other grammatical units such as quantifiers (Byamugisha et al., 2017a) . On the other hand, the syntactical structure of Runyankore is also taken into account during noun pluralization (Byamugisha et al., 2016c) and phonological conditioning (Byamugisha et al., 2016b) . This, together with evidence for the use of a grammar engine (Byamugisha et al., 2017a) and pattern-based generation (Byamugisha et al., 2016a) in Runyankore, are the basis for investigating the use of a combined semantic and syntactic, pattern-grammar-based approach to generate a Runyankore corpus that is large enough and has sufficient variation to be used as training data.. Given that there are supervised and unsupervised machine learning algorithms, we aimed to generate both labelled and unlabelled data, and focused on morphological analysis for the labels..", "cite_spans": [ { "start": 133, "end": 159, "text": "(Byamugisha et al., 2016c)", "ref_id": "BIBREF10" }, { "start": 179, "end": 205, "text": "(Byamugisha et al., 2016b)", "ref_id": "BIBREF6" }, { "start": 274, "end": 300, "text": "(Byamugisha et al., 2017a)", "ref_id": "BIBREF7" }, { "start": 415, "end": 441, "text": "(Byamugisha et al., 2016c)", "ref_id": "BIBREF10" }, { "start": 472, "end": 498, "text": "(Byamugisha et al., 2016b)", "ref_id": "BIBREF6" }, { "start": 618, "end": 644, "text": "(Byamugisha et al., 2016a)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Large Varied Training Corpora in Runyankore", "sec_num": "4" }, { "text": "We first extracted different parts of speech from a Runyankore dictionary (Taylor, 2009) . For both nouns and verbs, we only considered those that are applicable in multiple contexts (such as omuntu 'person' and reeb-'see'), and avoided nouns like egyora 'a cloth measure' and verbs like kusinsina 'stop oneself from saying'. We also avoided proper nouns unless they referred to time or locations. The annotation process on nouns for their sentiment, category, and noun class, on verbs for their type, subject, object, sentiment, and category, and on other parts of speech for their concord, phonological conditioning, and sentiment, was done manually, following the definitions and examples provided in the dictionary.", "cite_spans": [ { "start": 74, "end": 88, "text": "(Taylor, 2009)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "Nouns From 2548 singular nouns extracted from the dictionary, we selected 385 nouns. 
We only considered singular nouns because an existing Runyankore pluralizer (Byamugisha et al., 2016c) is available. We annotated each noun with its noun class, category, and sentiment. We identified 34 noun categories, and also accounted for their taxonomic relationships. Table 3 , it can be seen that a male kinship term (for example, grandfather) categorized as 'kin m' belongs to the superclass 'kins', that in turn belongs to the superclass 'humans' that is a subclass of 'animals', and this is a subclass of 'living' for all living things. Similarly, a fruit belongs to the superclasses 'food' and 'plants', and the latter is a subclass of 'living'.", "cite_spans": [ { "start": 161, "end": 187, "text": "(Byamugisha et al., 2016c)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 359, "end": 366, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "Verbs We selected 198 verbs from the 1330 extracted from the dictionary. As the verbs in the dictionary contain the infinitive ku, as well as the final vowel and verbal extensions, we further preprocessed the selected verbs to their roots, and annotated each with its subject category, sentiment, type, and object category. The subject categories correspond to the noun categories shown in Table 3, and we only considered seven verb types: action, catenative, copulative, dependent, performative, predicative, and stative. We also identified 28 object categories, which included whether the verb is intransitive, transitive, or ditransitive. From the categories shown in Table 4 , the subject and object of a verb can be obtained to produce a sentence. For example, the verb root ih for 'remove' is marked as having type 'dependent' and object category 'ditransitive locative'. A dependent verb requires a preposition, and the indirect object is a location, resulting in a pattern where the direct object is removed from somewhere.", "cite_spans": [], "ref_spans": [ { "start": 671, "end": 678, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "Other Parts of Speech For the other parts of speech, we extracted 21 adjectives, 6 adverbs, 7 conjunctions, and 8 prepositions. We annotated each with its concord (whether subject, adjective, relative, possessive, or pronomial), if phonological conditioning is required (and if so, what kind), and sentiment. The sentiment labels used here, as well as for nouns and verbs, are 'good', 'bad', 'none', and 'both'. The label 'both' is used where the sentiment of the part of speech can be either bad or good depending on the context in which it is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "Pattern Structures When determining pattern structures, we referred to the sentence structure used in the Runyankore newspaper Orumuri 2 . We aimed to cover the past, present, and future tenses, and based on a manual analysis of the tenses, aspects, and extensions used in this newspaper, we considered the simple present, present continuous, near future, remote past, near past, participial present continuous, and participial near future tenses. We also considered the applicative, causative, and passive extensions; the indicative and subjunctive moods; as well as primary and secondary negation. 
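Concretely, one such pattern specification might be encoded as follows; the field names below are assumptions made for illustration and do not come from the paper's implementation.

```python
# Hypothetical encoding of one generation pattern; the lists mirror the
# tenses, extensions, moods, and negation types considered in the paper,
# but the field names are assumed for illustration only.
simple_pattern = {
    "structure": ["noun", "verb"],   # object realized as an object concord inside the verb
    "verb_type": "action",
    "tenses": ["simple present", "present continuous", "near future",
               "remote past", "near past",
               "participial present continuous", "participial near future"],
    "extensions": ["applicative", "causative", "passive"],
    "moods": ["indicative", "subjunctive"],
    "negation": ["primary", "secondary"],
}
```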
Algorithm 4.1 below shows a simple sentence pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "The pattern shown in Algorithm 4.1 is the simplest possible pattern, with the object concord conjugated in the verb, instead of stating the object explicitly. It can be enhanced to include adjectives, adverbs, negation, tense and aspect, pluralization, and sentiment. Algorithm 4.2 shows a more complicated pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "In Algorithm 4.2, the sentiment is used when selecting the noun, verb, and adjective. The sentence output pattern shows the placement of the different parts of speech, as well as the use of two verb types, 'copulative' and 'stative'. Algorithm 4.1 (simple sentence pattern): {get the noun class of the noun} 5: vr \u2190 getVerbRoot('action') {Randomly get a verb root of type 'action'} 6: t \u2190 getTense(tenses) {Randomly select a tense from the available tenses} 7: o' \u2190 getObjectCategory(vr) {Get the appropriate object category for the verb} 8: o \u2190 getNoun(o') {Randomly obtain a noun based on the object category} 9: oc \u2190 getObjectConcord(nc) {Use the noun class to get the object concord} 10: sc \u2190 getSubjectConcord(nc) {Use the noun class to get the subject concord} 11: v \u2190 conjugateVerb(t, sc, oc, vr, fv) {Conjugate the verb for the tense t, object concord oc, and final vowel fv} 12: Result \u2190 \" n v \" {Generate the sentence} 13: return Result. We used a Context-Free Grammar (CFG) to conjugate verbs in Runyankore (Byamugisha et al., 2016b).", "cite_spans": [ { "start": 955, "end": 981, "text": "(Byamugisha et al., 2016b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Materials And Methods", "sec_num": "4.1" }, { "text": "We extended the existing Runyankore CFGs to include the tenses and aspects observed in the sentences in the Orumuri newspaper. The slots in Table 2 formed the non-terminals in the CFG, while the morphemes formed the terminals. In the CFG shown below, IG is the non-terminal with the initial grouping, with a production rule for PN, the 'PreInitial', IT, the 'Initial', and SN, the 'PostInitial'; FM is for the 'Formative'; LM, the 'Limitative'; IF, the 'Infix'; VR, the verb root; EX, the 'Extensions'; and FN, the 'Final'.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 148, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Verb Conjugation", "sec_num": null }, { "text": "S \u2192 IG FM LM IF VR EX FN; IG \u2192 PN IT SN; PN \u2192 ti | ni; IT \u2192 a | o | n | tu | mu | ba | gu | gi | ri | ga | ki | bi | e | zi | ru | tu | ka | bu | ku | gu | ga; SN \u2192 ta; FM \u2192 za | ka | riku | rikuza; LM \u2192 ki; IF \u2192 mu | ba | gu | gi | ri | ma | ki | bi | gi | zi | ru | tu | ka | bu | ha | gu | ga; VR \u2192 verbRoot; EX \u2192 w | er | erer | ir | zi | is | n | ur | uur | gur | VS | isPN; FN \u2192 a | e | ire", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Conjugation", "sec_num": null }, { "text": "The above CFG accounts for rules stating which slots can and cannot co-occur. For example, the continuous marker ni cannot co-occur with the primary negative ti or the secondary negative ta (Turamyomwe, 2011). 
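To make the CFG-based conjugation concrete, the sketch below rebuilds a heavily reduced subset of the grammar with NLTK, used here only as a stand-in for the Java CFG tool the authors actually worked with; the co-occurrence rules just mentioned and the phonological conditioning described in Section 4.2 are deliberately left out, so the raw concatenations it prints would still need post-processing.

```python
from nltk import CFG
from nltk.parse.generate import generate

# A heavily reduced, illustrative subset of the verbal slot grammar:
# IG = initial grouping (PreInitial PN, Initial IT, PostInitial SN),
# FM = formative (tense), VR = verb root, FN = final vowel.
grammar = CFG.fromstring("""
    S  -> IG VR FN | IG FM VR FN
    IG -> IT | PN IT | PN IT SN
    PN -> 'ni' | 'ti'
    IT -> 'n' | 'a' | 'ba'
    SN -> 'ta'
    FM -> 'ka' | 'riku'
    VR -> 'many' | 'reeb'
    FN -> 'a' | 'e'
""")

# Enumerate a few verb forms by concatenating the generated morphemes.
# This toy grammar does not encode the co-occurrence constraints of the
# full CFG (e.g. ni- never appearing with ti-/-ta-) or phonological
# conditioning, so some outputs are not valid Runyankore surface forms.
for morphemes in generate(grammar, n=10):
    print("".join(morphemes))
```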
Algorithm 4.2 (the more complicated pattern): {get the noun class of the noun} 6: vr1 \u2190 getVerbRoot('copulative') {Randomly get a copulative verb root} 7: vr2 \u2190 getVerbRoot('stative', s) {Randomly get a stative verb root based on the sentiment} 8: t \u2190 getTense(tenses) {Randomly select a tense from the available tenses} 9: ac \u2190 getAdjectivalConcord(nc) {Use the noun class to get the adjectival concord} 10: ar \u2190 getAdjectivalRoot(s) {Randomly get an adjectival root based on the sentiment} 11: aj \u2190 getAdjective(nc, ar) {Obtain the full adjective using the adjectival root and concord} 12: av \u2190 getAdverb() {Randomly get an adverb} 13: sc \u2190 getSubjectConcord(nc) {Use the noun class to get the subject concord} 14: v1 \u2190 conjugateVerb(sc, vr1) {Conjugate the copulative verb with the subject concord sc} 15: v2 \u2190 conjugateVerb(t, sc, vr2, fv) {Conjugate the stative verb for the tense t, subject concord sc, and final vowel fv} 16: Result \u2190 \" n aj v1 v2 av \" {Generate the sentence} 17: return Result", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Conjugation", "sec_num": null }, { "text": "The surface realizer was implemented as a Java application, with the verb conjugation based on the CFG Java tool by Xu et al. (2011). From the annotated resources and the selected generation patterns, we generated text in seven tenses: the simple present tense, which has no tense morpheme; the present continuous tense that uses the continuous marker ni-; the near future tense, -za-, that applies to the infinitive form of the verb; the remote past tense, -ka-; the near past tense, -ire; the participial present continuous tense, -riku-; and the participial near future tense, -rikuza- (Turamyomwe, 2011). All these tenses, except for the near past tense that is placed in the final slot, are placed in the formative slot in Table 2.", "cite_spans": [ { "start": 116, "end": 132, "text": "Xu et al. (2011)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 725, "end": 732, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Training Data Generation", "sec_num": "4.2" }, { "text": "We also used the applicative (-er- and -erer-), causative (-ir-), and passive (-w-) extensions that are placed in the extensions slot in Table 2; the indicative (-a-) and subjunctive (-e) moods that are placed in the final slot; as well as primary negation (ti-) that is placed in the initial slot, and secondary negation (-ta-) that is placed in the post-initial slot in Table 2 (Turamyomwe, 2011).", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 372, "end": 379, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Training Data Generation", "sec_num": "4.2" }, { "text": "Of the seven conjunctions, four (haza, reero, kandi, and obwo) are different variations of 'and', thus the following phrase should maintain the same sentiment as the preceding phrase. On the other hand, three of the conjunctions (kwonka, okwihaho, and baitu) are different variations of 'but', and should therefore change the sentiment of the following phrase. 
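The conjunction handling just described can be summarized in a few lines; the sketch below uses hypothetical names and is an illustration of the rule, not the paper's implementation.

```python
# 'And'-type conjunctions keep the sentiment of the following phrase;
# 'but'-type conjunctions flip it. Names are illustrative assumptions.
AND_CONJUNCTIONS = {"haza", "reero", "kandi", "obwo"}
BUT_CONJUNCTIONS = {"kwonka", "okwihaho", "baitu"}

OPPOSITE = {"good": "bad", "bad": "good"}

def sentiment_after(conjunction: str, current_sentiment: str) -> str:
    """Sentiment to use for the phrase that follows the conjunction."""
    if conjunction in BUT_CONJUNCTIONS:
        return OPPOSITE.get(current_sentiment, current_sentiment)
    return current_sentiment  # 'and'-type conjunctions preserve the sentiment

print(sentiment_after("kandi", "good"))   # -> 'good'
print(sentiment_after("kwonka", "good"))  # -> 'bad'
```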
Given a type of verb, a sentiment, and a noun category, sentiment change was implemented in three ways: (1) using an adjective or adverb of the opposite sentiment; (2) negating the verb, which would make a positive verb negative, and vice versa; and (3) changing the sentiment itself, and then using it to obtain verbs and nouns of this new sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data Generation", "sec_num": "4.2" }, { "text": "In order to vary the structure and content of each sentence, we randomly selected the sentence pattern to use, which specific part-of-speech to realize based on the different noun categories, verb types, and the sentiment of the adjectives, when to pluralize the nouns, as well as whether to change, negate, or keep the existing sentiment. We also performed phonological conditioning that is required during generation, where, due to the agglutinative structure of Runyankore, the generated text can contain letter combinations that do not exist in Runyankore phonology. When this occurs, phonological rules are used to make the required changes that reflect the sound change, and this is referred to as phonological conditioning (Maho, 1999) . Phonological conditioning was performed during noun pluralization, verb conjugation, and pattern realization, and was achieved through vowel coalescence (adding an extra vowel), vowel elision (deleting a vowel), vowel harmony (considering the presence of a nasal compound), vowel assimilation (replacing a vowel with an apostrophe), or by deleting or adding a consonant.", "cite_spans": [ { "start": 730, "end": 742, "text": "(Maho, 1999)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data Generation", "sec_num": "4.2" }, { "text": "Finally, a boolean flag was used to decide whether to generate labelled or unlabelled data. Table 5 shows the different tags that were considered for labelling the morphology of the generated text. These tags were based on the labels used in a Runyankore dictionary (Taylor, 2009) for different parts of speech, as well as the tags used in the morphological analyzers by Eiselen and Puttkammer (2014) ", "cite_spans": [ { "start": 266, "end": 280, "text": "(Taylor, 2009)", "ref_id": "BIBREF32" }, { "start": 371, "end": 400, "text": "Eiselen and Puttkammer (2014)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Training Data Generation", "sec_num": "4.2" }, { "text": "We generated a one million sentence generalpurpose domain independent corpus. We also generated labelled data, with labels for sentiment, parts-of-speech (such as noun, adjective, preposition, etc.) as well as the morphological units of the conjugated verb. From the 28 object categories, 7 tenses, 3 extensions, 8 major patterns, and 4 sentiment adjustment options, we created 18,816 different ways of varying the sentence structure for a single subject, verb, and object. Further variation is introduced by performing noun pluralization, having 34 different noun categories and 7 different verb types, as well as 7 different conjunctions for the 8 major patterns. We evaluated for the quality of the generated text using a task-based evaluation, where we applied the generated text to some supervised and unsupervised machine learning tasks. For the latter, we used FastText to obtain word vectors and assess the semantic relatedness from the generated text. 
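For readers who want to reproduce this style of evaluation, the snippet below shows how the fastText Python bindings could be used both for the word-vector step just mentioned and for the sentiment classifier described next; the file names and training parameters are placeholders, since the paper does not specify them.

```python
import fasttext

# Unsupervised: learn word vectors from the generated (unlabelled) corpus
# and inspect nearest neighbours of a query word. File names are placeholders.
vec_model = fasttext.train_unsupervised("runyankore_generated.txt", model="skipgram")
print(vec_model.get_nearest_neighbors("omuntu"))  # expect people/kinship terms

# Supervised: train a sentiment classifier on sentences labelled in fastText's
# default "__label__<tag>" format, then evaluate on a held-out split.
clf = fasttext.train_supervised(input="runyankore_sentiment_train.txt")
n, precision, recall = clf.test("runyankore_sentiment_test.txt")
print(n, precision, recall)
```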
We also trained and tested a sentiment analysis text classifier based on FastText .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Evaluation", "sec_num": "4.3" }, { "text": "Assessing Semantic Relatedness We obtained word vectors and queried for nearest neighbors. The query word was selected based on its semantic category, that is, whether it is a noun for people, plants, or animals, or an adjective. The examples in Table 6 show the query word and the first five results according to highest confidence.", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 253, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Results and Evaluation", "sec_num": "4.3" }, { "text": "Results omuntu (person) omugyesi (reaper), omutaahi (companion), omukoreesa (overseer), omushomesa (teacher), omukuru (elder) omuti (tree) omutumba (banana tree), omwani (coffee tree), omuzaabibu (grape or grapevine), omucungwa (orange), omugusha (sorghum) omukono (arm) omunwa (mouth), omutwe (head), eriino (tooth), enkokora (elbow), okuguru (leg) embwa (dog) embeba (rat), enkyende (monkey), empungu (bird of prey), enumi (bull), enyawaawa (green ibis) rungi (beautiful) rurungi (beautiful), rukuru (important), rirungi (beautiful), oruyonjo (clean/tidy), orurikutukura (pure) rofa (dirty) erirofa (dirty), eriruhire (tired), rigufu (short), erifiire (stupid), ribi (ugly) The results in Table 6 show that the semantics embedded in the generated text are correctly associated as similar.", "cite_spans": [], "ref_spans": [ { "start": 691, "end": 698, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Query Word", "sec_num": null }, { "text": "Performing Sentiment Analysis In order to perform sentiment analysis on the generated text, we also stored the sentiment of each sentence (whether good, bad, none, or both) in a separate file; each sentence labelled according to the FastText default style of ' label '. For example, a sentence with a 'bad' sentiment is labelled as: label bad omunywi mugufu naaba naatomera obugaari kandi omurofa mugufu naaba naatomera ekyarani, 'The short beer supplier spends time knocking over wheelbarrows and the short dirty one spends time knocking over sowing machines'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Word", "sec_num": null }, { "text": "We trained two models, one that accounts for all four sentiments, and another that only predicts 'good' or 'bad'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Word", "sec_num": null }, { "text": "Each sentiment has over 200,000 examples in the dataset ('good'=270,720, 'bad'=271,031, 'none'=207,796, and 'both'=250,453) . The four-sentiment model was trained on 800,000 sentences and tested on 200,000 sentences, and achieved 64% accuracy. The binary sentiment model had a dataset with 541,751 examples, and it was trained on 500,000 sentences and tested on 41,751 sentences, and achieved 77.3% accuracy. These results show a good first attempt at sentiment analysis for Runyankore.", "cite_spans": [ { "start": 56, "end": 123, "text": "('good'=270,720, 'bad'=271,031, 'none'=207,796, and 'both'=250,453)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Query Word", "sec_num": null }, { "text": "We investigated how to solve the problem of the lack of training data in Runyankore, and found several ways in which training data can be generated. 
We found the use of a combined semantic and syntactic, pattern-grammar-based approach to be applicable to the grammatical complexity and under-resourced state of Runyankore. Using this approach, we were able to generate one million labelled and unlabelled sentences in seven of Runyankore's 14 tenses. This large dataset can be used in both supervised and unsupervised machine learning algorithms for various tasks as shown in our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The effort required to generate this dataset is significant, as explained in Section 4.1. The grammatical aspects too are specific to Runyankore's morphology. Despite this, previous work has shown that the important text generation aspects-noun pluralization, verb conjugation, and pattern-based generation-can be generalized to other agglutinating Bantu languages. For noun pluralization, a generic noun pluralizer exists for agglutinating Bantu languages (Byamugisha et al., 2018) . Verb conjugation using CFGs has also been shown to be possible for isiZulu , another agglutinating Bantu language. Finally, the ability to bootstrap text generation patterns from one agglutinating Bantu language to another was shown in (Byamugisha, 2019) . We therefore hypothesize that, with some tailoring, this approach may be generalizable to other Bantu languages.", "cite_spans": [ { "start": 457, "end": 482, "text": "(Byamugisha et al., 2018)", "ref_id": "BIBREF9" }, { "start": 721, "end": 739, "text": "(Byamugisha, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Interestingly, the results from word similarity evaluation in Table 6 hint on the possibility of using this approach to identify the noun class (NC) of a noun. Generally, the classes of nouns in Bantu languages are based on the semantics of the noun. 
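One way this hint could be exploited is a nearest-neighbour vote over the word embeddings; the sketch below is speculative, assumes a trained fastText model plus a small lexicon of nouns with manually annotated classes (both assumptions made for illustration), and is not something implemented in the paper.

```python
from collections import Counter

def guess_noun_class(query, vec_model, nc_lexicon, k=10):
    """Guess the noun class of `query` by majority vote over those of its
    nearest neighbours whose class is already known. `vec_model` is a trained
    fastText model; `nc_lexicon` maps annotated nouns to their noun class.
    Both inputs are assumptions for illustration."""
    neighbours = vec_model.get_nearest_neighbors(query, k=k)  # [(score, word), ...]
    votes = Counter(nc_lexicon[w] for _, w in neighbours if w in nc_lexicon)
    return votes.most_common(1)[0][0] if votes else None
```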
Table 7 shows the semantic generalizations of the types of nouns in each class (Keet and Khumalo, 2014; Baertlein and Ssekitto, 2014; Kimenyi, 2004; Jeon et al., 2015; Zentz, 2016; Taraldsen, 2010; Mohlala, 2003; Katamba, 2003; Maho, 1999) .", "cite_spans": [ { "start": 330, "end": 354, "text": "(Keet and Khumalo, 2014;", "ref_id": "BIBREF19" }, { "start": 355, "end": 384, "text": "Baertlein and Ssekitto, 2014;", "ref_id": "BIBREF2" }, { "start": 385, "end": 399, "text": "Kimenyi, 2004;", "ref_id": "BIBREF21" }, { "start": 400, "end": 418, "text": "Jeon et al., 2015;", "ref_id": "BIBREF15" }, { "start": 419, "end": 431, "text": "Zentz, 2016;", "ref_id": "BIBREF39" }, { "start": 432, "end": 448, "text": "Taraldsen, 2010;", "ref_id": "BIBREF30" }, { "start": 449, "end": 463, "text": "Mohlala, 2003;", "ref_id": "BIBREF24" }, { "start": 464, "end": 478, "text": "Katamba, 2003;", "ref_id": "BIBREF17" }, { "start": 479, "end": 490, "text": "Maho, 1999)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 251, "end": 258, "text": "Table 7", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Description of Associated Nouns 1 and 2 People and kinship 3 and 4 Plants, nature, and some parts of the body 5 and 6", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 69, "text": "Nouns 1 and 2 People and kinship 3 and 4", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Noun Class", "sec_num": null }, { "text": "Fruits, liquids, some parts of the body, and paired things 7 and 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Class", "sec_num": null }, { "text": "Inanimate The inability to detect the noun class of nouns with the same prefix but belonging to different classes (such as omuntu (person) in NC 1 and omuti (tree) in NC 3) is a big problem in Bantu language computational linguistics. This is because, as explained in Section 2, the noun class (NC) is at the heart of an extensive system of concordial agreement, and getting the NC wrong can result in incorrect noun pluralization, verb conjugation, as well as other parts -of-speech such as adjectives and possessives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Class", "sec_num": null }, { "text": "Comparing the semantic categories of nouns in Table 7 with the examples in Table 6 , it can be seen that omuntu and its related words, people terms, would belong to NC 1; the omuti group, plants, would fit in NC 3; the omukono group, parts of the body, can be split among NCs 3 and 5; and embwa, animals, can be placed in NC 9.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 53, "text": "Table 7", "ref_id": "TABREF13" }, { "start": 75, "end": 82, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Noun Class", "sec_num": null }, { "text": "Existing approaches for surface realization in Runyankore (Byamugisha et al., 2016a (Byamugisha et al., , 2017b and isiZulu (Keet and Khumalo, 2014; annotate nouns with their noun class (NC) in order to solve the problem of having the same class prefix in different classes (see classes 1, 3, and 18 in Table 1 in Section 2). 
However, our results from word similarity evaluation show that a semantic distinction is made between people nouns (that are found in NC 1; see the omuntu example in Table 6 ) and other nouns starting with the omuprefix (see the omuti and omukono examples in Table 6 ).", "cite_spans": [ { "start": 58, "end": 83, "text": "(Byamugisha et al., 2016a", "ref_id": "BIBREF5" }, { "start": 84, "end": 111, "text": "(Byamugisha et al., , 2017b", "ref_id": "BIBREF8" }, { "start": 124, "end": 148, "text": "(Keet and Khumalo, 2014;", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 303, "end": 310, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 492, "end": 499, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 585, "end": 592, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Noun Class", "sec_num": null }, { "text": "Finally, while the results on sentiment analysis are not spectacular, our work is, to the best of our knowledge, the first sentiment analysis module for Runyankore. Additionally, the results from the word similarity evaluation also show that different sentiments can be distinguished (see the rungi and rofa examples in Table 6 ).", "cite_spans": [], "ref_spans": [ { "start": 320, "end": 327, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Noun Class", "sec_num": null }, { "text": "In this paper, we investigated how to generate a large and varied corpus to act as training data for a grammatically complex and computationally underresourced language, Runyankore. We found the use of a combined semantic and syntactic, patterngrammar-based approach to be applicable to Runyankore. Using this approach, we were able to generate one million labelled and unlabelled sentences, that were evaluated as correctly encoding related word semantics, and performing well when applied to a supervised machine learning task, sentiment analysis. Future work will involve identifying a qualitative evaluation for the dataset; manually labelling sentences from Orumuri for sentiment, in order to have an independent dataset to evaluate sentiment analysis, and investigating how the labelled data can be used together with the word similarity results to determine the noun class of a noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "isiZulu is a Bantu language indigenous to South Africa", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Definiteness and Specificity in Runyankore-Rukiga", "authors": [ { "first": "Allen", "middle": [], "last": "Asiimwe", "suffix": "" } ], "year": 2014, "venue": "Stallenbosch University", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen Asiimwe. 2014. Definiteness and Specificity in Runyankore-Rukiga. Ph.D. 
thesis, Stallenbosch Uni- versity, Cape Town, South Africa.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning the structure of generative models without labeled data", "authors": [ { "first": "H", "middle": [], "last": "", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "He", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2017, "venue": "34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Stephen Bach, Bryan He, Alexander Ratner, and Christopher R\u00e9. 2017. Learning the structure of gen- erative models without labeled data. In 34th Inter- national Conference on Machine Learning (ICML 2017), Sidney, Austtralia. ArXiv.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Luganda nouns inflectional morphology and tests", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Baertlein", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Ssekitto", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizabeth Baertlein and Martin Ssekitto. 2014. Lu- ganda nouns inflectional morphology and tests. Lin- guistic Portfolios, 3.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.04606" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ontology Verbalization in Agglutinating Bantu Languages: A Study of Runyankore and its Generalizability", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha. 2019. Ontology Verbalization in Agglutinating Bantu Languages: A Study of Run- yankore and its Generalizability. Ph.D. thesis, Uni- versity of Cape Town.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bootstrapping a runyankore cnl from an isizulu cnl", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" }, { "first": "C", "middle": [ "Maria" ], "last": "Keet", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Derenzi", "suffix": "" } ], "year": 2016, "venue": "5th Workshop on Controlled Natural Language (CNL 2016)", "volume": "9767", "issue": "", "pages": "25--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha, C. Maria Keet, and Brian DeRenzi. 2016a. Bootstrapping a runyankore cnl from an isizulu cnl. In 5th Workshop on Controlled Natural Language (CNL 2016), volume 9767, pages 25-36, Aberdeen, Scotland. 
Springer LNAI.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Tense and aspect in runyankore using a context-free grammar", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" }, { "first": "C", "middle": [ "Maria" ], "last": "Keet", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Derenzi", "suffix": "" } ], "year": 2016, "venue": "9th International Conference on Natural Language Generation (INLG 2016)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha, C. Maria Keet, and Brian DeRenzi. 2016b. Tense and aspect in runyankore using a context-free grammar. In 9th International Confer- ence on Natural Language Generation (INLG 2016), Edinburgh, Scotland.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Evaluation of a runyankore grammar engine for healthcare messages", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" }, { "first": "C", "middle": [ "Maria" ], "last": "Keet", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Derenzi", "suffix": "" } ], "year": 2017, "venue": "10th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha, C. Maria Keet, and Brian DeRenzi. 2017a. Evaluation of a runyankore grammar en- gine for healthcare messages. In 10th International Conference on Natural Language Generation (INLG 2017), Santiago de Compostela, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Toward an nlg system for bantu languages: first steps with runyankore (demo)", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" }, { "first": "C", "middle": [ "Maria" ], "last": "Keet", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Derenzi", "suffix": "" } ], "year": 2017, "venue": "10th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha, C. Maria Keet, and Brian DeRenzi. 2017b. Toward an nlg system for bantu languages: first steps with runyankore (demo). In 10th Interna- tional Conference on Natural Language Generation (INLG 2017), Santiago de Compostela, Spain.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Pluralizing nouns in agglutinating bantu languages", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" }, { "first": "C", "middle": [ "Maria" ], "last": "Keet", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Derenzi", "suffix": "" } ], "year": 2018, "venue": "27th International Conference on Computational Linguistics (COLING 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha, C. Maria Keet, and Brian DeRenzi. 2018. Pluralizing nouns in agglutinating bantu lan- guages. 
In 27th International Conference on Com- putational Linguistics (COLING 2018), Santa Fe, New Mexico, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pluralizing nouns in isizulu and related languages", "authors": [ { "first": "Joan", "middle": [], "last": "Byamugisha", "suffix": "" }, { "first": "C", "middle": [ "Maria" ], "last": "Keet", "suffix": "" }, { "first": "Langa", "middle": [], "last": "Khumalo", "suffix": "" } ], "year": 2016, "venue": "17th International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "9626", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Byamugisha, C. Maria Keet, and Langa Khumalo. 2016c. Pluralizing nouns in isizulu and related lan- guages. In 17th International Conference on Intel- ligent Text Processing and Computational Linguis- tics (CICLing 2016), volume 9626, Konya, Turkey. Springer LNCS.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generating a training corpus for ocr post-correction using encoder-decoder model", "authors": [ { "first": "Cyril", "middle": [], "last": "Eva D'hondt", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "Grouin", "suffix": "" }, { "first": "", "middle": [], "last": "Grau", "suffix": "" } ], "year": 2017, "venue": "8th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1006--1014", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eva D'hondt, Cyril Grouin, and Brigitte Grau. 2017. Generating a training corpus for ocr post-correction using encoder-decoder model. In 8th International Joint Conference on Natural Language Processing, pages 1006-1014, Taipei, Taiwan.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Developing text resources for ten south african languages", "authors": [ { "first": "Roald", "middle": [], "last": "Eiselen", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" }, { "first": "", "middle": [], "last": "Puttkammer", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "3698--3703", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roald Eiselen and Martin J Puttkammer. 2014. Devel- oping text resources for ten south african languages. In LREC, pages 3698-3703.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Creating training corpora for NLG micro-planners", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "179--188", "other_ids": { "DOI": [ "10.18653/v1/P17-1017" ] }, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating train- ing corpora for NLG micro-planners. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 179-188, Vancouver, Canada. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep speech: Scaling up endto-end speech recognition", "authors": [ { "first": "Awni", "middle": [], "last": "Hannun", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Diamos", "suffix": "" }, { "first": "Erich", "middle": [], "last": "Elsen", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Prenger", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Satheesh", "suffix": "" }, { "first": "Shubho", "middle": [], "last": "Sengupta", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Coates", "suffix": "" }, { "first": "Y", "middle": [ "Andrew" ], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "Computational Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Awni Hannun, Carl Case, Jared Casper, Bryan Catan- zaro, Greg Diamos, Erich Elsen, Bryan Prenger, San- jeev Satheesh, Shubho Sengupta, Adam Coates, and Y. Andrew Ng. 2014. Deep speech: Scaling up end- to-end speech recognition. Computational Research Repository (CoRR), abs/1412.5567.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A basic sketch grammar of g\u00edk\u00fay\u00fa", "authors": [ { "first": "Lisa", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "Jessica", "middle": [], "last": "Li", "suffix": "" }, { "first": "Samantha", "middle": [], "last": "Mauney", "suffix": "" }, { "first": "Ana\u00ed", "middle": [], "last": "Navarro", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Wittke", "suffix": "" } ], "year": 2015, "venue": "Rice Working Papers in Linguistics", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Jeon, Jessica Li, Samantha Mauney, Ana\u00ed Navarro, and Jonas Wittke. 2015. A basic sketch grammar of g\u00edk\u00fay\u00fa. Rice Working Papers in Linguistics, 6.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bantu nominal morphology", "authors": [ { "first": "Francis", "middle": [], "last": "Katamba", "suffix": "" } ], "year": 2003, "venue": "The Bantu Languages: Routledge Language Family Series 4, chapter", "volume": "", "issue": "", "pages": "103--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis Katamba. 2003. Bantu nominal morphology. In The Bantu Languages: Routledge Language Fam- ily Series 4, chapter 7, pages 103-120. 
Taylor and Francis Routledge, London.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Verbalising owl ontologies in isizulu with python", "authors": [ { "first": "C", "middle": [ "M" ], "last": "Keet", "suffix": "" }, { "first": "M", "middle": [], "last": "Xakaza", "suffix": "" }, { "first": "L", "middle": [], "last": "Khumalo", "suffix": "" } ], "year": 2017, "venue": "14th Extended Semantic Web Conference (ESWC'17)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. M. Keet, M. Xakaza, and L. Khumalo. 2017. Verbal- ising owl ontologies in isizulu with python. In 14th Extended Semantic Web Conference (ESWC'17), Portoroz, Slovenia. Springer LNCS.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Towards verbalizing ontologies in isizulu", "authors": [ { "first": "C", "middle": [], "last": "", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Keet", "suffix": "" }, { "first": "Langa", "middle": [], "last": "Khumalo", "suffix": "" } ], "year": 2014, "venue": "4th Workshop on Controlled Natural Languages (CNL'14)", "volume": "", "issue": "", "pages": "78--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Maria Keet and Langa Khumalo. 2014. Towards verbalizing ontologies in isizulu. In 4th Workshop on Controlled Natural Languages (CNL'14), pages 78-89, Galway, Ireland.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Grammar rules for the isizulu complex verb. Southern African Linguistics and Applied Language Studies", "authors": [ { "first": "C", "middle": [], "last": "", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Keet", "suffix": "" }, { "first": "Langa", "middle": [], "last": "Khumalo", "suffix": "" } ], "year": 2017, "venue": "", "volume": "35", "issue": "", "pages": "183--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Maria Keet and Langa Khumalo. 2017. Grammar rules for the isizulu complex verb. Southern African Linguistics and Applied Language Studies, 35:183- 200.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Kinyarwanda morphology", "authors": [ { "first": "Alex", "middle": [], "last": "Kimenyi", "suffix": "" } ], "year": 2004, "venue": "Morphology: An International Handbook for Inflection and Word Formation", "volume": "17", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Kimenyi. 2004. Kinyarwanda morphology. In Geert Booij, Christian Lehmann, Joachim Mudgan, and Stavros Skopeteas, editors, Morphology: An In- ternational Handbook for Inflection and Word For- mation, volume 17.2. De Gruyter.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural text generation from structured data with application to the biography domain", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Lebret", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2016, "venue": "2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1203--1213", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with ap- plication to the biography domain. In 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1203-1213, Austin, Texas. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Comparative Study of Bantu Noun Classes", "authors": [ { "first": "Jouni", "middle": [], "last": "Maho", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jouni Maho. 1999. A Comparative Study of Bantu Noun Classes. Ph.D. thesis, Goteborg University, Goteborg, Sweden.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The bantu attribute noun class prefixes and their suffixal counterparts", "authors": [ { "first": "Linkie", "middle": [], "last": "Mohlala", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linkie Mohlala. 2003. The bantu attribute noun class prefixes and their suffixal counterparts, with special reference to zulu. Master's thesis, University of Pre- toria, Pretoria, South Africa.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "How we are using natural language generation to scale forge", "authors": [ { "first": "Jake", "middle": [], "last": "Neely", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jake Neely. 2018. How we are using natural language generation to scale forge.ai. Webpage.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Introduction", "authors": [ { "first": "Derek", "middle": [], "last": "Nurse", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "Philippson", "suffix": "" } ], "year": 2003, "venue": "The Bantu Languages: Routledge Language Family Series", "volume": "4", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derek Nurse and Gerard Philippson. 2003. Introduc- tion. In The Bantu Languages: Routledge Language Family Series 4, chapter 1, pages 1-9. Taylor and Francis Routledge, London.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A survey of the usages of deep learning in natural language processing", "authors": [ { "first": "Daniel", "middle": [ "W" ], "last": "Otter", "suffix": "" }, { "first": "Julian", "middle": [ "R" ], "last": "Medina", "suffix": "" }, { "first": "Jugal", "middle": [ "K" ], "last": "Kalita", "suffix": "" } ], "year": 2018, "venue": "Computing Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel W. Otter, Julian R. Medina, and Jugal K. Kalita. 2018. A survey of the usages of deep learning in natural language processing. Computing Research Repository (CoRR), abs/1807.10854.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Snorkel: Rapid training data creation with weak supervision", "authors": [ { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "H", "middle": [ "Stephen" ], "last": "Bach", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Ehrenberg", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Fries", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Christophher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2017, "venue": "VLDB Endowment (PVLDB)", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Ratner, H. Stephen Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christophher R\u00e9. 2017. 
Snorkel: Rapid training data creation with weak su- pervision. VLDB Endowment (PVLDB), 11(3).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Data programming: Creating large training sets, quickly", "authors": [ { "first": "J", "middle": [], "last": "Alexander", "suffix": "" }, { "first": "Christopher M De", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Sa", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Selsam", "suffix": "" }, { "first": "", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems 29 (NIPS 2016)", "volume": "", "issue": "", "pages": "3567--3575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data pro- gramming: Creating large training sets, quickly. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 29 (NIPS 2016), pages 3567-3575. Curran Associates, Inc., Barcelona, Spain.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The nanosyntax of nguni noun class prefixes and concords", "authors": [ { "first": "", "middle": [], "last": "Knut Tarald Taraldsen", "suffix": "" } ], "year": 2010, "venue": "Lingua", "volume": "120", "issue": "6", "pages": "1522--1548", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knut Tarald Taraldsen. 2010. The nanosyntax of nguni noun class prefixes and concords. Lingua, 120(6):1522-1548.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Demonstrative determiners in runyankore-rukiga", "authors": [ { "first": "Doreen", "middle": [ "Daphine" ], "last": "Tayebwa", "suffix": "" } ], "year": 2014, "venue": "Master's thesis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doreen Daphine Tayebwa. 2014. Demonstrative deter- miners in runyankore-rukiga. Master's thesis, Nor- wegian University of Science and Technology, Nor- way.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A Simplified Runyankore-Rukiga-English Dictionary", "authors": [ { "first": "C", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Taylor. 2009. A Simplified Runyankore-Rukiga- English Dictionary. Fountain Publishers, Kampala, Uganda.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Improving deep learning using generic data augmentation networks", "authors": [ { "first": "Luke", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Geoff", "middle": [], "last": "Nitschke", "suffix": "" } ], "year": 2017, "venue": "Computing Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Taylor and Geoff Nitschke. 2017. Improving deep learning using generic data augmentation net- works. Computing Research Repository (CoRR), abs/1708.06020.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Tense and aspect in runyankore-rukiga: Linguistic resources and analysis", "authors": [ { "first": "", "middle": [], "last": "Justus Turamyomwe", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justus Turamyomwe. 2011. 
Tense and aspect in runyankore-rukiga: Linguistic resources and analy- sis. Master's thesis, Norwegian University of Sci- ence and Technology, Norway.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Machine learning and applied linguistics. The Encyclopedia of Applied Linguistics", "authors": [ { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "https://onlinelibrary.wiley.com/doi/abs/10.1002/9781405198431.wbeal1486" ] }, "num": null, "urls": [], "raw_text": "Sowmya Vajjala. 2018. Machine learning and applied linguistics. The Encyclopedia of Applied Linguis- tics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Multi-domain neural network language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Lina", "middle": [ "Maria" ], "last": "Mrksic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Rojas-Barahona", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "J. Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "15th Annual Conference of the North American Chapter of the Association for Computational Linguistics-Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "120--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-barahona, Pei-hao Su, David Vandyke, and J. Steve Young. 2016. Multi-domain neural network language generation for spoken di- alogue systems. In 15th Annual Conference of the North American Chapter of the Association for Com- putational Linguistics-Human Language Technolo- gies (NAACL-HLT), pages 120-129, San Diego, Cal- ifornia, USA. Association for Computational Lin- guistics (ACL), Association for Computational Lin- guistics (ACL).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Gas\u00edc", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Mrks\u00edc", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1711--1721", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gas\u00edc, Nikola Mrks\u00edc, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Se- mantically conditioned lstm-based natural language generation for spoken dialogue systems. In 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1711-1721, Lisbon, Portu- gal. 
Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A toolkit for generating sentences from context-free grammars", "authors": [ { "first": "Zhiwu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lixiao", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Haiming", "middle": [], "last": "Zhen", "suffix": "" } ], "year": 2011, "venue": "International Journal of Software and Informatics", "volume": "5", "issue": "", "pages": "659--676", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiwu Xu, Lixiao Zheng, and Haiming Zhen. 2011. A toolkit for generating sentences from context-free grammars. International Journal of Software and In- formatics, 5:659-676.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Forming Wh-Questions in Shona: A Comparative Bantu Perspective", "authors": [ { "first": "Jason", "middle": [], "last": "Zentz", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Zentz. 2016. Forming Wh-Questions in Shona: A Comparative Bantu Perspective. Ph.D. thesis, Yale University.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Text understanding from scratch", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Computing Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang and Yann LeCun. 2015. Text understand- ing from scratch. Computing Research Repository (CoRR), abs/1502.01710.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "content": "", "html": null, "text": "", "num": null }, "TABREF2": { "type_str": "table", "content": "
Verbal morphology of Runyankore; App: applicative, Cs: causative, Ps: passive, Rec: reciprocal, Rev: reversive, Stv: stative, Itv: intensive, Red: reduplicative, Ism: instrumental
3 Approaches to Generating Textual Training Corpora
", "html": null, "text": "", "num": null }, "TABREF3": { "type_str": "table", "content": "
Superclass | Noun Categories
abstract | abstract give, abstract have, abstract rw, abstract time, prop time
time | abstract time, prop time
food | food fruit, food liquid, food plant, food solid
kins | kin, kin f, kin m
humans | human, human f, human m, human med, human y, kins
animal | animal meat, animal plant, animal y
animals | animal, humans
loc | loc in, loc out, prop loc
part | part animal, part plant
plants | plant, food fruit, food plant
non living | food cook, food loc, thing cloth, thing move, thing tool
living | animals, plants
all | living, non living
<unclassified> | illness, thing med
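To make the use of this taxonomy concrete, the following is a minimal Python sketch (not the implementation used in this work) of how the superclass-to-category mapping above could be stored and expanded when sampling nouns; the lexicon argument is a hypothetical stand-in for the categorized dictionary of terms.

# Illustrative sketch only: encode the superclass taxonomy above and expand a
# superclass into the concrete noun categories it covers. The dict mirrors the
# table; the lexicon passed to get_noun is a hypothetical noun -> category map.
import random

TAXONOMY = {
    "abstract": ["abstract give", "abstract have", "abstract rw", "abstract time", "prop time"],
    "time": ["abstract time", "prop time"],
    "food": ["food fruit", "food liquid", "food plant", "food solid"],
    "kins": ["kin", "kin f", "kin m"],
    "humans": ["human", "human f", "human m", "human med", "human y", "kins"],
    "animal": ["animal meat", "animal plant", "animal y"],
    "animals": ["animal", "humans"],
    "loc": ["loc in", "loc out", "prop loc"],
    "part": ["part animal", "part plant"],
    "plants": ["plant", "food fruit", "food plant"],
    "non living": ["food cook", "food loc", "thing cloth", "thing move", "thing tool"],
    "living": ["animals", "plants"],
    "all": ["living", "non living"],
}

def expand(category):
    """Recursively expand a superclass into the base noun categories it covers."""
    if category not in TAXONOMY:
        return {category}  # already a base category, e.g. "food fruit"
    result = set()
    for child in TAXONOMY[category]:
        result |= expand(child)
    return result

def get_noun(category, lexicon):
    """Randomly pick a noun whose category falls under the requested (super)class."""
    allowed = expand(category)
    candidates = [noun for noun, cat in lexicon.items() if cat in allowed]
    return random.choice(candidates)

For example, expand("humans") returns the base categories human, human f, human m, human med, human y, kin, kin f, and kin m, so a noun request against the superclass draws from all of them.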
", "html": null, "text": "shows the classifications for the different categories.", "num": null }, "TABREF4": { "type_str": "table", "content": "", "html": null, "text": "The taxonomic groupings for the different noun categories From", "num": null }, "TABREF5": { "type_str": "table", "content": "
Verb Category | Object Categories
ditransitive | all, all; illness, med; all, loc
intransitive |
transitive | Nouns: abstract, all, animal, food, human, illness, living, non living, part, plant
transitive | Verbs: action, all
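The selectional restrictions above can likewise be encoded as a simple lookup. The sketch below is illustrative only: the category names come from the table, while the key names, the pairing of the two object slots for ditransitive verbs, and the function name are assumptions.

# Illustrative sketch only: verb-to-object selectional restrictions as a lookup.
import random

OBJECT_CATEGORIES = {
    # ditransitive verbs take two objects, drawn here as a pair of categories
    "ditransitive": [("all", "all"), ("illness", "med"), ("all", "loc")],
    # intransitive verbs take no object
    "intransitive": [],
    # transitive verbs take either a nominal or a verbal (infinitive) object
    "transitive_noun": ["abstract", "all", "animal", "food", "human",
                        "illness", "living", "non living", "part", "plant"],
    "transitive_verb": ["action", "all"],
}

def get_object_category(verb_category):
    """Randomly choose an allowed object category (or pair) for a verb category."""
    options = OBJECT_CATEGORIES[verb_category]
    return random.choice(options) if options else None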
", "html": null, "text": "shows the object categories for the different verb categories.", "num": null }, "TABREF6": { "type_str": "table", "content": "", "html": null, "text": "", "num": null }, "TABREF7": { "type_str": "table", "content": "
Algorithm 4.1 An example of a simple generation pattern
1: Variables: n noun, nc noun class, vr verb root, t tense, o object category, o object, oc object concord, v conjugated verb, sc subject concord
2: Functions: getNoun(nounCategory), getNounClass(n), getVerbRoot(type), getTense(tenses), getObjectCategory(vr), getObjectConcord(nc), conjugateVerb(t, sc, oc, vr, fv)
3: n ← getNoun(nounCategory) {Randomly obtain a noun based on one of the categories in Table 3}
4: nc ← getNounClass(n)
Previous work shows that it is possible to use a Context-Free Grammar
2 Orumuri is available from https://www.newvision.co.ug/new_vision/news/1044356/orumuri
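As a rough illustration of how a simple generation pattern of this kind could be realised, the Python sketch below mirrors the steps of Algorithm 4.1; the lexicon entries, concord tables, and the conjugate_verb stub are hypothetical placeholders rather than the dictionary and CFG-based conjugator used in this work.

# Minimal sketch in the spirit of Algorithm 4.1; NOT the authors' implementation.
# All linguistic resources below are invented placeholders.
import random

LEXICON = {
    # noun -> (noun class, category); placeholder entries
    "omuntu": (1, "human"),
    "ekitabo": (7, "thing tool"),
}
VERB_ROOTS = {"transitive_noun": ["reeb"]}      # placeholder verb root(s)
SUBJECT_CONCORD = {1: "a", 7: "ki"}             # placeholder concords per noun class
OBJECT_CONCORD = {1: "mu", 7: "ki"}
TENSES = ["simple present", "near past"]

def conjugate_verb(tense, sc, oc, verb_root, final_vowel="a"):
    # Stand-in for the CFG-based conjugation: handles the simple present only,
    # so the tense argument is accepted but not yet used.
    return f"{sc}{oc}{verb_root}{final_vowel}"

def generate_sentence():
    noun, (nc, _) = random.choice(list(LEXICON.items()))     # step 3: pick a noun
    sc = SUBJECT_CONCORD[nc]                                  # subject concord from its class
    verb_root = random.choice(VERB_ROOTS["transitive_noun"])  # pick a verb root
    tense = random.choice(TENSES)                             # pick a tense
    obj, (obj_nc, _) = random.choice(list(LEXICON.items()))   # pick an object noun
    oc = OBJECT_CONCORD[obj_nc]                               # object concord from its class
    verb = conjugate_verb(tense, sc, oc, verb_root)           # conjugate via the (stub) grammar
    return f"{noun} {verb} {obj}"

print(generate_sentence())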
", "html": null, "text": "Variables: n noun, nc noun class, vr verb root, t tense, o object category, o object, oc object category, v conjugated verb,", "num": null }, "TABREF8": { "type_str": "table", "content": "
Algorithm 4.2 An example of a more complicated generation pattern
1: Variables: n noun, nc noun class, vr verb root, t tense, o object category, o object, oc object concord, v conjugated verb, s sentiment, aj adjective, ar adjectival root, av adverb, sc subject concord
2: Functions: getNoun(nounCategory, s), getNounClass(n), getVerbRoot(type, s), getTense(tenses), getObjectCategory(vr), getObjectConcord(nc), conjugateVerb(t, sc, vr, fv), getSentiment(), getAdjectivalRoot(s), getAdjective(nc, ar), getAdverb()
3: s ← getSentiment() {Randomly select from one of the four sentiments}
4: n ← getNoun(nounCategory, s) {Randomly obtain a noun based on its sentiment and one of the categories in Table 3}
5: nc ← getNounClass(n)
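The sentiment-conditioned pattern can be sketched in the same way to yield labelled examples. The snippet below is illustrative only: the four sentiment label names and the sentiment-tagged word lists are assumptions, and the real pattern also selects a verb, tense, and adverb as listed above.

# Illustrative sketch only: attach a sentiment label to each generated example
# so the output can serve as labelled training data (e.g. for sentiment analysis).
import random

SENTIMENTS = ["positive", "negative", "neutral", "mixed"]   # assumed label set
NOUNS_BY_SENTIMENT = {s: ["omuntu"] for s in SENTIMENTS}    # placeholder nouns
ADJECTIVES_BY_SENTIMENT = {                                  # placeholder adjectives
    "positive": ["murungi"], "negative": ["mubi"],
    "neutral": ["muto"], "mixed": ["muto"],
}

def generate_labelled_example():
    sentiment = random.choice(SENTIMENTS)                    # step 3: pick a sentiment
    noun = random.choice(NOUNS_BY_SENTIMENT[sentiment])      # step 4: sentiment-matched noun
    adjective = random.choice(ADJECTIVES_BY_SENTIMENT[sentiment])
    sentence = f"{noun} {adjective}"                         # toy phrase; the full pattern
    return sentence, sentiment                               # also conjugates a verb, etc.

examples = [generate_labelled_example() for _ in range(5)]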
", "html": null, "text": "Variables: n noun, nc noun class, vr verb root, t tense, o object category, o object, oc object category, v conjugated verb, s sentiment, aj adjective, ar adjectival root, av", "num": null }, "TABREF9": { "type_str": "table", "content": "
Tag | Meaning
<NC number>ac | NC + adjective concord
adj | Adjective
adv | Adverb
aug | Augment
conj | Conjunction
cont | Continuous marker
ext | Extension
fv | Final vowel
inf | Infinitive
n<NC number> | Noun + NC
<NC number>oc | NC + object concord
<NC number>pc | NC + possessive concord
primNeg | Primary negative
secNeg | Secondary negative
<NC number>sc | NC + subject concord
tn | Tense marker
v | Verb
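For illustration, the tag inventory can be held as a small lookup table and instantiated per noun class; the segmentation of "omuntu" at the end is a hypothetical example for illustration, not one drawn from the generated corpus.

# Illustrative sketch only: the tag inventory above as a Python dict, plus a
# hypothetical labelled segmentation; the analysis of "omuntu" is an assumption.
TAGS = {
    "{nc}ac": "NC + adjective concord", "adj": "Adjective", "adv": "Adverb",
    "aug": "Augment", "conj": "Conjunction", "cont": "Continuous marker",
    "ext": "Extension", "fv": "Final vowel", "inf": "Infinitive",
    "n{nc}": "Noun + NC", "{nc}oc": "NC + object concord",
    "{nc}pc": "NC + possessive concord", "primNeg": "Primary negative",
    "secNeg": "Secondary negative", "{nc}sc": "NC + subject concord",
    "tn": "Tense marker", "v": "Verb",
}

def tag_for(template, noun_class=None):
    """Instantiate an NC-parameterised tag, e.g. tag_for('{nc}sc', 1) -> '1sc'."""
    return template.format(nc=noun_class) if "{nc}" in template else template

# Hypothetical labelled segmentation of the class-1 noun "omuntu" ('person'):
labelled_word = [("o", "aug"), ("muntu", tag_for("n{nc}", 1))]   # [("o","aug"), ("muntu","n1")]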
", "html": null, "text": "that covers nine Bantu languages.", "num": null }, "TABREF10": { "type_str": "table", "content": "", "html": null, "text": "List of tags used to label morphological units and parts of speech", "num": null }, "TABREF11": { "type_str": "table", "content": "
", "html": null, "text": "Results from word similarity evaluation", "num": null }, "TABREF13": { "type_str": "table", "content": "
", "html": null, "text": "Classification of Bantu nouns into noun classes (the 'and' indicates that the two classes are a singular/plural pairing)", "num": null } } } }