{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:13.368104Z"
},
"title": "Automatic Interlinear Glossing for Otomi language",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Barriga",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Nacional Aut\u00f3noma de M\u00e9xico (UNAM)",
"location": {}
},
"email": "dbarriga@ciencias.unam.mx"
},
{
"first": "Victor",
"middle": [],
"last": "Mijangos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Nacional Aut\u00f3noma de M\u00e9xico (UNAM)",
"location": {}
},
"email": "vmijangosc@ciencias.unam.mx"
},
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": "",
"affiliation": {
"laboratory": "URPP Language and Space",
"institution": "University of Z\u00fcrich",
"location": {}
},
"email": "ximena.gutierrezvasques@uzh.ch"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In linguistics, interlinear glossing is an essential procedure for analyzing the morphology of languages. This type of annotation is useful for language documentation, and it can also provide valuable data for NLP applications. We perform automatic glossing for Otomi, an under-resourced language. Our work also comprises the pre-processing and annotation of the corpus. We implement different sequential labelers. CRF models represented an efficient and good solution for our task (accuracy above 90%). Two main observations emerged from our work: 1) models with a higher number of parameters (RNNs) performed worse in our lowresource scenario; and 2) the information encoded in the CRF feature function plays an important role in the prediction of labels; however, even in cases where POS tags are not available it is still possible to achieve competitive results.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In linguistics, interlinear glossing is an essential procedure for analyzing the morphology of languages. This type of annotation is useful for language documentation, and it can also provide valuable data for NLP applications. We perform automatic glossing for Otomi, an under-resourced language. Our work also comprises the pre-processing and annotation of the corpus. We implement different sequential labelers. CRF models represented an efficient and good solution for our task (accuracy above 90%). Two main observations emerged from our work: 1) models with a higher number of parameters (RNNs) performed worse in our lowresource scenario; and 2) the information encoded in the CRF feature function plays an important role in the prediction of labels; however, even in cases where POS tags are not available it is still possible to achieve competitive results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the important steps of linguistic documentation is to describe the grammar of a language. Morphological analysis constitutes one of the stages for building this description. Traditionally, this is done by means of interlinear glossing. This is an annotation task where linguists analyze sentences in a given language and they segment each word with the aim of annotating the morphosyntactic categories of the morphemes within this word (see example in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This type of linguistic annotated data is a valuable resource not only for documenting a language but it can also enable NLP technologies, e.g., by providing training data for automatic morphological analyzers, taggers, morphological segmentation, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, not all languages have this type of annotated corpora readily available. Glossing is a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "h\u00ed t\u00f3=tsog\u00ed Glossing NEG 3.PRF=leave Translation 'I have not left it' Table 1 : Example of morpheme-by-morpheme glosses for Otomi time consuming task that requires linguistic expertise. In particular, low-resource languages lack of documentation and language technologies (Mager et al., 2018) .",
"cite_spans": [
{
"start": 272,
"end": 292,
"text": "(Mager et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "Our aim is to successfully produce automatic glossing annotation in a low resource scenario. We focus on Otomi of Toluca, an indigenous language spoken in Mexico (Oto-Manguean family). This is a morphological rich language with fusional tendency. Moreover, it has scarcity of digital resources, e.g., monolingual and parallel corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "Our initial resource is a small corpus transcribed into a phonetic alphabet. We pre-process it and we perform manual glossing. Once we have this dataset, we use it for training an automatic glossing system for Otomi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "By using different variations of Conditional Random Fields (CRFs), we were able to achieve good accuracy in the automatic glossing task (above 90%), regardless the low-resource scenario. Furthermore, computationally more expensive methods, i.e., neural networks, did not perform as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "We also performed an analysis of the results from the linguistics perspective. We explored the automatic glossing performance for a subset of labels to understand the errors that the model makes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "Our work can be a helpful tool for reducing the workload when manually glossing. This would have an impact on language documentation. It can also lead to an increment of annotated resources for Otomi, which could be a starting point for developing NLP technologies that nowadays are not yet available for this language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "As we have mentioned before, glossing comprises describing the morphological structure of a sentence by associating every morpheme with a morphological label or gloss. In a linguistic gloss, there are usually three levels of analysis: a) the segmentation by morphemes; b) the glosses describing these morphemes; and c) the translation or lexical correspondences in a reference language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Several works have tried to automatize this task by using computational methods. In Snoek et al. (2014) , they use a rule-based approach (Finite State Transducer) to obtain glosses for Plains Cree, an Algonquian language. They focus only on the analysis of nouns. Samardzic et al. (2015) propose a method for glossing Chintang language; they divide the task into grammatical and lexical glossing. Grammatical glossing is approached as a supervised part-of-speech tagging, while for lexical glossing, they use a dictionary. A fully automatized procedure is not performed since word segmentation is not addressed.",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "Snoek et al. (2014)",
"ref_id": "BIBREF22"
},
{
"start": 264,
"end": 287,
"text": "Samardzic et al. (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Some other works have approached the whole pipeline of automatic glossing as a supervised tagging task using machine learning sequential models, and they have particularly focused on underresourced languages (Moeller and Hulden, 2018; Anastasopoulos et al., 2018; Zhao et al., 2020) . In Anastasopoulos et al. (2018) , they make use of neural-based models with dual sources, they leverage easy-to-collect translations.",
"cite_spans": [
{
"start": 208,
"end": 234,
"text": "(Moeller and Hulden, 2018;",
"ref_id": "BIBREF16"
},
{
"start": 235,
"end": 263,
"text": "Anastasopoulos et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 264,
"end": 282,
"text": "Zhao et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 288,
"end": 316,
"text": "Anastasopoulos et al. (2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In Moeller and Hulden (2018) , they perform automatic glossing for Lezgi (Nakh-Daghestanian family) under challenging low-resource conditions. They implement different methods, i.e., CRF, CRF+SVM, Seq2Seq neural network. The best results are obtained with a CRF model that leverages POS tags. The glossing is mainly focused on tagging grammatical (functional) morphemes. While the lexical items are tagged simply as stems.",
"cite_spans": [
{
"start": 3,
"end": 28,
"text": "Moeller and Hulden (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "This latter approach especially influences our work. In fact, Moeller and Hulden (2018) highlight the importance of testing these models on other languages, particularly polysynthetic languages with fusion and complex morphonology. Our case of study, Otomi, is precisely a language highly fusional with complex morphophonological patterns, as we will discuss on Section 3.",
"cite_spans": [
{
"start": 62,
"end": 87,
"text": "Moeller and Hulden (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Finally, automatic glossing is not only crucial for aiding linguistic research and language documentation. This type of annotation is also a valu-able source of morphological information for several NLP tasks. For instance, it could be used to train state-of-the-art morphological segmentation systems for low-resource languages (Kann and Sch\u00fctze, 2018) . The information contained in the glosses is also helpful for training morphological reinflection systems (Cotterell et al., 2016) , this consists in predicting the inflected form of a word given its lemma. It also can help in the automatic generation of morphological paradigms (Moeller et al., 2020) .",
"cite_spans": [
{
"start": 329,
"end": 353,
"text": "(Kann and Sch\u00fctze, 2018)",
"ref_id": "BIBREF11"
},
{
"start": 461,
"end": 485,
"text": "(Cotterell et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 634,
"end": 656,
"text": "(Moeller et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "These morphological tools can then be used to build downstream applications, e.g., machine translation, text generation. It is noteworthy that these are language technologies that are not yet available for all languages, especially for under-resourced ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Otomi is considered a group of languages spoken in Mexico (around 300,000 speakers). It belongs to the Oto-Pamean branch of the Oto-Manguean family (Barrientos L\u00f3pez, 2004) . It is a morphologically rich language that shows particular phenomena (Baerman et al., 2019; Lastra, 2001 ):",
"cite_spans": [
{
"start": 148,
"end": 172,
"text": "(Barrientos L\u00f3pez, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 245,
"end": 267,
"text": "(Baerman et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 268,
"end": 280,
"text": "Lastra, 2001",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "\u2022 fusional patterns for the inflection of the verbs (it fuses person, aspect, tense and mood in a single affix);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "\u2022 a complex system of inflectional classes;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "\u2022 stem alternation, e.g., d\u00ed=p\u00e4di 'I know' and bi=mb\u00e4di 'He knew';",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "\u2022 complex morphophnological patterns, e.g., d\u00ed=p\u00e4di 'I know', d\u00ed=p\u00e4-h\u016b 'We know';",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "\u2022 complex noun inflectional patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "Furthermore, digital resources are scarce for this language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "We focus on the Otomi of Toluca variety. 1 Our starting point is the corpus compiled by Lastra (1992) , which is comprised of narrations and dialogues. The corpus was originally transcribed into a phonetic alphabet. We pre-processed this corpus, i.e., we performed digitization and orthographic normalization. 2 We used the orthographic standard proposed by INALI (INALI, 2014), although we had problems in processing the appropriate UTF-8 representations for some of the vocals (Otomi has a wide range of vowels).",
"cite_spans": [
{
"start": 88,
"end": 101,
"text": "Lastra (1992)",
"ref_id": "BIBREF12"
},
{
"start": 310,
"end": 311,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "The corpus, then, was manually tagged, 3 i.e., interlinear glossing and Part Of Speech (POS). We followed the Leipzig glossing rules (Comrie et al., 2008) . In addition to this corpus, we included 81 extra short sentences that a linguist annotated; these examples contained particularly difficult phenomena, e.g., stem alternation, reduction of the stem and others. Table 2 contains general information about the final corpus size.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Comrie et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3.1"
},
{
"text": "We also show in Table 3 the top ten most common POS tags and gloss labels in the corpus. We can see that the size of our corpus is small compared to the magnitude of resources usually used for doing in NLP in other languages. ",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Domain",
"sec_num": null
},
{
"text": "We focus on the two first levels of glossing, i.e., given an Otomi sentence, our system will be able to morphologically segment each word and gloss each of the morphemes within the words, as it is shown in the Example 1. Translation implies a different level of analysis and, due to the scarce digital resources, it is not addressed here. Similar to previous works, we use a closed set of labels, i.e., we have labels for all the grammatical (functional) morphemes and a single label for all the lexical morphemes (stem). We can see in the Example 1 that morphemes like tsog\u00ed ('leave') are labeled as stem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic glossing",
"sec_num": "3.2"
},
{
"text": "(1) h\u00ed NEG t\u00f3=tsog\u00ed 3.PRF=stem Once we have a gloss label associated to each morpheme, we prepare the training data, i.e., we pair each letter with a BIO-label. BIO-labeling consists on associating each original label with a Beginning-Inside-Outside (BIO) label. This means that each position of a morpheme is declared either as the beginning (B) or inside (I). We neglected O (outside). BIO-labels include the morpheme category (e.g. B-stem) or affix glosses (e.g. B-PST, for past tense). For example, the labeled representation of the word t\u00f3tsog\u00ed would be as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic glossing",
"sec_num": "3.2"
},
{
"text": "(2) t B-3.PRF | \u00f3 I-3.PRF | t B-stem | s I-stem | o I-stem | g I-stem | \u00ed I-stem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic glossing",
"sec_num": "3.2"
},
{
"text": "As we can see, BIO-labels help to mark the boundaries of the morphemes within a word, and they also assign a gloss label to each morpheme. We followed this procedure from Moeller and Hulden (2018). Once we have this labeling, we can train a model, i.e., predict the labels that indicate the morphological segmentation and the associated gloss of each morpheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic glossing",
"sec_num": "3.2"
},
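The BIO-labeling step described above can be made concrete with a short sketch. The following is an illustrative Python snippet (not the authors' code; function and variable names are invented) that turns a morpheme-segmented, glossed word into per-character BIO labels:

```python
# Illustrative sketch: convert a segmented, glossed word into per-character
# BIO labels (B-<gloss> for the first character of a morpheme, I-<gloss> for
# the remaining ones; the O label is not used, as described above).
def word_to_bio(morphemes):
    """morphemes: list of (surface, gloss) pairs for one word."""
    labels = []
    for surface, gloss in morphemes:
        for i, char in enumerate(surface):
            prefix = "B" if i == 0 else "I"
            labels.append((char, f"{prefix}-{gloss}"))
    return labels

# Example (2): tótsogí analyzed as tó (3.PRF) + tsogí (stem).
print(word_to_bio([("tó", "3.PRF"), ("tsogí", "stem")]))
# [('t', 'B-3.PRF'), ('ó', 'I-3.PRF'), ('t', 'B-stem'), ('s', 'I-stem'), ...]
```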
{
"text": "In this task, the input would be a string of characters c 1 , ..., c N and the output is another string g 1 , ..., g N which corresponds to a sequence of labels (from a finite set of labels), i.e., the glossing. In order to perform automatic glossing, we need to learn a mapping between the input and the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic glossing",
"sec_num": "3.2"
},
{
"text": "We approach the task of automatic glossing as a supervised structured prediction. We use CRFs for predicting the sequence of labels that represents the interlinear glossing. In particular, we used a linear-chain CRF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "The CRFs need to represent each of the characters from the input sentence as a vector. This is done by means of a feature function. In order to map the input sequence into vectors, the feature function need to take into account relevant information about the input and output sequences (features).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "Feature functions play a major role in the performance of CRF models. In our case, we build these vectors by taking into account information about the current letter, the current, previous and next POS tags, beginning/end of words and sentences, letter position, and others (see Section 4.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "Let X = (c 1 , ..., c N ) be a sequence of characters representing the input of our model (a sentence), and Y = (g 1 , ..., g N ) the output (a sequence of BIO-labels). The CRF model estimates the probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "p(Y |X) = 1 Z N i=1 exp{w T \u03c6(Y, X, i)} (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "Here Z is the partition function and w is the weights vector. \u03c6(Y, X, i) is the vector representing the ith element in the input sentence. This vector is extracted by the feature function \u03c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "The features taken into account by the feature function depend on the experimental settings, we specify them below (Section 4.1). Training the model consists in learn the weights contained in w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
{
"text": "Following Moeller and Hulden (2018), we used CRFsuite (Okazaki, 2007) . This implementation uses the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization algorithm in order to learn the parameters of the CRF. Elastic Net regularization (consisting of L 1 and L 2 regularization terms) were incorporated in the optimization procedure.",
"cite_spans": [
{
"start": 54,
"end": 69,
"text": "(Okazaki, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2.1"
},
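As a rough illustration of this setup, the following sketch uses the sklearn-crfsuite wrapper around CRFsuite (our assumption; the paper only states that CRFsuite with L-BFGS and Elastic Net was used). The toy feature dictionaries and labels are placeholders:

```python
# Sketch, assuming the sklearn-crfsuite wrapper around CRFsuite is available.
# X_train: per-sentence lists of per-character feature dicts;
# y_train: per-sentence lists of BIO gloss labels. Toy data shown below.
import sklearn_crfsuite

X_train = [[{"char": "t", "word.start": True}, {"char": "ó"}, {"char": "t"}]]
y_train = [["B-3.PRF", "I-3.PRF", "B-stem"]]

crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",   # L-BFGS optimization, as reported in the paper
    c1=0.1,              # L1 penalty (Elastic Net = L1 + L2 regularization)
    c2=0.1,              # L2 penalty
    max_iterations=50,   # the maximum used in the experiments (footnote 7)
)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```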
{
"text": "We explored three additional sequential models: 1) a traditional Hidden Markov Model; 2) a vanilla Recurrent Neural Network (RNN); and 3) a biLSTM model. Hidden Markov Model: A hidden Markov Model (HMM) (Baum and Petrie, 1966; Rabiner, 1989 ) is a classical generative graphical model which factorizes the joint distribution function into the product of connected components:",
"cite_spans": [
{
"start": 203,
"end": 226,
"text": "(Baum and Petrie, 1966;",
"ref_id": "BIBREF4"
},
{
"start": 227,
"end": 240,
"text": "Rabiner, 1989",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "p(g 1 , ..., g N , c 1 , ..., c N ) = N t=1 p(c t |g t )p(g t |g t\u22121 ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
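A minimal sketch of this baseline, using NLTK's supervised HMM trainer (the paper's footnotes state that NLTK was used for the HMM); the toy character/label sequence is a placeholder:

```python
# Sketch: supervised HMM over (character, BIO-label) pairs with NLTK, which
# the footnotes mention was used for the HMM baseline. Probabilities are
# Maximum Likelihood estimates; tagging uses Viterbi decoding.
from nltk.tag import hmm

train_data = [[("t", "B-3.PRF"), ("ó", "I-3.PRF"), ("t", "B-stem"),
               ("s", "I-stem"), ("o", "I-stem"), ("g", "I-stem"),
               ("í", "I-stem")]]
trainer = hmm.HiddenMarkovModelTrainer()
tagger = trainer.train_supervised(train_data)
print(tagger.tag(list("tótsogí")))
```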
{
"text": "This method calculates the probabilities using the Maximum Likelihood Estimation method. Likewise, the tagging of the test set is made with the Viterbi algorithm (Forney, 1973) . 4 Recurrent Neural Networks: In contrast with HMM, Recurrent Neural Networks are discriminative models which estimate the conditional probability p(g 1 , ..., g N |c 1 , ..., c N ) using recurrent layers. We used two types of recurrent networks:",
"cite_spans": [
{
"start": 162,
"end": 176,
"text": "(Forney, 1973)",
"ref_id": "BIBREF8"
},
{
"start": 179,
"end": 180,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "1. Vanilla RNN: For the vanilla RNN (Elman, 1990) the recurrent layers were defined as:",
"cite_spans": [
{
"start": 32,
"end": 49,
"text": "RNN (Elman, 1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (t) = g(W [h (t\u22121) ; x (t) ] + b)",
"eq_num": "(3)"
}
],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "Here, x (t) is the embedding vector representing the character c t , t = 1, ..., N , in the sequence and [h (t\u22121) ; x (t) ] is the concatenation of the previous recurrent layer with this embedding vector.",
"cite_spans": [
{
"start": 118,
"end": 121,
"text": "(t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "2. biLSTM RNN: The bidirectional LSTM (Hochreiter and Schmidhuber, 1997) or biL-STM uses different gates to process the recurrent information. However, it requires of a higher number of parameters to train. Each biLSTM layer is defined by:",
"cite_spans": [
{
"start": 38,
"end": 72,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (t) = biLST M (h (t\u22121) , x (t) )",
"eq_num": "(4)"
}
],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "h (t\u22121) = [ \u2212 \u2192 h (t\u22121) ; \u2190 \u2212 h (t\u22121)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "] is the concatenation of the forward and backward recurrent layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
{
"text": "In each RNN model we used one embedding layer previous to the recurrent layers in order to obtain vector representations of the input characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other sequential models",
"sec_num": "3.2.2"
},
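For reference, a minimal PyTorch sketch of a character-level biLSTM tagger with the hyperparameters reported here and in the footnotes (embedding size 100, one recurrent layer of size 200, SGD with learning rate 0.1); this is not the authors' released code, and the vocabulary/label sizes are placeholders:

```python
# Sketch (assuming PyTorch) of a character-level biLSTM gloss tagger.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, n_chars, n_labels, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim)        # character embeddings
        self.rnn = nn.LSTM(emb_dim, hidden_dim,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_labels)   # per-character scores

    def forward(self, char_ids):               # char_ids: (batch, seq_len)
        h, _ = self.rnn(self.emb(char_ids))    # (batch, seq_len, 2*hidden)
        return self.out(h)

# Placeholder sizes: 40 characters, 60 BIO gloss labels.
model = BiLSTMTagger(n_chars=40, n_labels=60)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on a dummy 7-character sequence.
x = torch.randint(0, 40, (1, 7))
y = torch.randint(0, 60, (1, 7))
loss = loss_fn(model(x).view(-1, 60), y.view(-1))
loss.backward()
optimizer.step()
```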
{
"text": "For CRFs we propose three different experimental settings. 5 Each setting varies in the type of features that are taken into account. We defined a general set of features that capture different type of information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "1. the current character in the input sentence; 2. indication if the character is the beginning/end of word;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "3. indication if the word containing the character is the beginning/end of a sentence;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "4. the position of the character in the word;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "5. previous and next characters (character window);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "6. the current word POS tag, and also the previous and the next one; and 7. a bias term. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "To sum up, the CRF takes the information of the current character as input; but in order to obtain contextual information, we also take into consideration the previous and next character. Grammatical information is provided by the POS tag of the word in which the character appears. In addition to this, we add the POS tag of the previous and next words. These are our CRF settings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
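An illustrative sketch of a per-character feature function of this kind (the exact feature set and names here are our own, not the authors'):

```python
# Sketch of a per-character feature function covering the information listed
# above: current character, word/sentence boundaries, character position,
# a character window, POS tags of the current/previous/next word, and a bias.
def char_features(sent, w, c):
    """sent: list of (word, pos_tag) pairs; w: word index; c: char index."""
    word, pos = sent[w]
    feats = {
        "bias": 1.0,
        "char": word[c],
        "char.position": c,
        "word.start": c == 0,
        "word.end": c == len(word) - 1,
        "sent.start": w == 0 and c == 0,
        "sent.end": w == len(sent) - 1 and c == len(word) - 1,
        "pos": pos,
        "prev.pos": sent[w - 1][1] if w > 0 else "BOS",
        "next.pos": sent[w + 1][1] if w < len(sent) - 1 else "EOS",
    }
    if c > 0:
        feats["prev.char"] = word[c - 1]
    if c < len(word) - 1:
        feats["next.char"] = word[c + 1]
    return feats

# Example: features for the first character of the verb in "hí tótsogí"
# (the POS tags used here are placeholders).
print(char_features([("hí", "ADV"), ("tótsogí", "V")], 1, 0))
```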
{
"text": "\u2022 CRF linear : This setting considers all the information available, i.e., the features that we mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "\u2022 CRF P OSLess : In this setting we excluded the POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "\u2022 CRF HM M Like : This setting takes into account the minimum information, i.e. information about the current letter and the immediately preceding one. We use this name because this configuration contains similar information as the HMMs but using CRFs to build them. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "As previously mentioned, we included other sequential methods for the sake of comparison, i.e., a simple Hidden Markov Model, which can be see as the baseline since it is the simpler model, and two neural-based models: a basic vanilla RNN and a biLSTM model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "The embedding dimension was 100 units for both the vanilla RNN and the biLSTM models. 8 In both neural-based models we used one hidden, recurrent layer; the activation for the vanilla RNN was the hyperbolic tangent. The dimension of the vanilla and LSTM hidden layers was 200. 9 The features used in the CRF settings are implicitly taken into account by the neural-based models. Except for the POS tags, we did not include that information in the neural settings. In this sense, these last neural methods contain the same information as the CRF P OSLess setting.",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "8",
"ref_id": null
},
{
"start": 277,
"end": 278,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We evaluated our CRF-based automatic glossing models by using k-Fold Cross-Validation. We used k = 3 due to the small dataset size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
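A small sketch of this evaluation scheme (assuming scikit-learn's KFold; the data here are stand-ins for the per-sentence feature and label sequences):

```python
# Sketch: 3-fold cross-validation over sentences, assuming scikit-learn.
import numpy as np
from sklearn.model_selection import KFold

sentences = np.arange(12)          # stand-ins for the glossed sentences
kf = KFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(sentences)):
    # here one would train a CRF on the train folds and score the held-out fold
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test sentences")
```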
{
"text": "For the other sequential methods, we performed a hold-out evaluation. 10 In all cases we preserved the same proportion between training and test datasets (see Table 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "1180 Test 589 We report the accuracy, we also calculated the precision, recall and F1-score for every label in the corpus. Table 5 contains the results for all settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
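A minimal sketch of how such per-label scores can be computed over the flattened BIO label sequences (assuming scikit-learn; y_true and y_pred below are placeholders):

```python
# Sketch: per-label precision, recall, and F1 over flattened BIO labels,
# assuming scikit-learn. Accuracy is the fraction of correctly tagged characters.
from itertools import chain
from sklearn.metrics import accuracy_score, classification_report

y_true = [["B-3.PRF", "I-3.PRF", "B-stem", "I-stem"]]
y_pred = [["B-3.PRF", "B-stem", "B-stem", "I-stem"]]

flat_true = list(chain.from_iterable(y_true))
flat_pred = list(chain.from_iterable(y_pred))
print(accuracy_score(flat_true, flat_pred))
print(classification_report(flat_true, flat_pred))
```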
{
"text": "We can see that the CRF based models outperformed the other methods in the automatic glossing task. Among the CRF settings, CRF HM M Like was the one with the lowest accuracy (and also precision and recall), this CRF used the least information/features, i.e., the current character of the input sentence and the previous emitted label. This is probably related to the fact that Otomi has a rich morphological system (with prefixes and suffixes), therefore, the lack of information about previous and subsequent characters affects the accuracy in the prediction of gloss labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
{
"text": "The CRF settings CRF P OSLess and the CRF linear are considerably better. The variations between these two settings is small, although the accuracy of CRF linear is higher. Interestingly, the lack of POS tags does not seem to affect the accuracy that much. If the glossing is still accurate (above 90%) after excluding POS tags, this could be convenient, especially in low-resource scenarios, where this type of annotation may not always be available for training the model. We do not know if this observation could be generalized to all languages. In the case of Otomi, the information encoded in the features could be good enough for capturing the morphological structure and word order that is important for predicting the correct label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
{
"text": "Additionally, we tried several variations on the hyperparameters of Elastic Net regularization (CRFs), however, we did not obtain significant improvements (see Appendix A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
{
"text": "The model that we took as the baseline, the HMM, obtained a lower performance compared to the CRF settings (0.878). However, if we take into consideration that HMM was the simpler model, its performance is surprisingly good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
{
"text": "The performance of CRF HM M Like is very similar to that of HMM. As we mentioned before, these two settings make use of the same information, but their approach is different: CRFs are discriminative while HMMs are generative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
{
"text": "The neural approaches that we implemented were not the most suitable for our task. They obtained the lowest accuracy, 0.741 for the vanilla RNN and 0.563 for the biLSTM. This result might seem striking, especially since neural approaches are by far the most popular nowadays in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instances (sentences) Train",
"sec_num": null
},
{
"text": "We have several conjectures that could explain why neural approaches were not the most accurate for our particular task. For instance, we observed that the performance of the RNN models (vanilla and biLSTM) was highly sensitive to the frequency of the labels. Both neural models performed better for high frequency labels (such as stem).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "In principle, the models that we used for automatic glossing have conceptual differences. HMMs are generative models, while CRFs and neural models are discriminative. This distinction, however, does not seem to influence the results. The HMM performed better than the neural-based models but it was outperformed by the CRFs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "CRFs and neural networks mainly in the way they process the input data. While CRFs depend on the initial features selected by an expert, neural networks process a simple representation of the input data (one-hot vectors) through a series of hidden layers which rely on a large number of parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "The number of parameters is a key factor in neural networks, they usually have a large number of parameters that allows them to generalize well in complex tasks. For example, the biLSTM model has the highest number of parameters, while the vanilla RNN has a considerably reduced number of parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "However, theoretically, a model with higher capacity will also require a larger number of examples to generalize adequately (Vapnik, 1998) . The capacity on neural-based models depends on the number of parameters (Shalev-Shwartz and Ben-David, 2014). This could be problematic in terms of lowresource scenarios. In fact, in our experiments, the model with the highest number of parameters, the biLSTM, performed the worst. Models with fewer parameters, such as HMM and CRFs outperformed the neural-based models by a large margin.",
"cite_spans": [
{
"start": 124,
"end": 138,
"text": "(Vapnik, 1998)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "It is worth mentioning that we are aware that hyperparameters and other factors can strongly influence neural model's performance. There could be variations that result in more suitable solutions for this task. However, overall, this would probably represent a more expensive solution than using CRFs (or even a HMM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "Our results seem consistent with previous works for the same task where neural approaches fail to outperform CRFs in low-resource scenarios (Moeller and Hulden, 2018) .",
"cite_spans": [
{
"start": 140,
"end": 166,
"text": "(Moeller and Hulden, 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "Complex models with many parameters might not be the most efficient solution in these types of low-resource scenarios. However, we leave this as an interesting research question for the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "Finally, our proposed models, CRF linear and the CRF P OSLess , seemed to be the best alternative for the task of automatic glossing of Otomi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs vs RNNs",
"sec_num": "5.1"
},
{
"text": "In this section we focus on the results from a more qualitative point of view. We discuss some inguistic particularities of Otomi and how they affected the performance of the models. We also present an analysis of how the best evaluated method, i.e. CRF linear , performed for a selected subset of gloss labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "As we mentioned in previous sections, the information comprised in the features seems to be decisive in the performance of the CRF models. When some of these features were removed, performance tended to decay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "For the correct labeling of Otomi morphology, contextual information (previous and next characters in the sentence) did have an impact in performance. This may be attributed to the presence of both prefixes and affixes in Otomi words. Stem alternation, for example, is conditioned by the prefixes in the word. Stem reduction is conditioned by the suffixes. In order to correctly label both stem and affixes, the system must consider the previous and next elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "There exist morphological or syntactic elements in the sentence that contributes to identify words category. For example, most of the nouns are preceded by a determiner (r\u012b , singular, or y\u012b , plural) . This kind of information is captured in the features and can help in the performance of the automatic glossing task.",
"cite_spans": [
{
"start": 169,
"end": 200,
"text": "(r\u012b , singular, or y\u012b , plural)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "Frequency of labels is a factor that influence the performance of the models. Labels with high frequency are better evaluated. For the neural-based models the impact of frequency was more pronounced. However, despite of the low-resource scenario we were able to achieve good results with the CRFs (above 90%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "Languages exhibit a wide range of complexity in their morphological systems. Otomi has several phenomena that may seem difficult to capture by the automatic models. However, even when languages have complex morphological systems, there are frequent and informative patterns (e.g. inflec-tional affixes) that can help to the recognition of them. This hypothesis is reflected in the low entropy conjecture (Ackerman and Malouf, 2013) , which concerns the organization of morphological patterns in order to make morphology learnable. This hypothesis points out that morphological organization seeks to reduce uncertainty. Table 6 presents the evaluation results for a subset of the labels used for the automatic glossing. These labels are linguistically interesting as there is a contrast between productive and unproductive elements.",
"cite_spans": [
{
"start": 404,
"end": 431,
"text": "(Ackerman and Malouf, 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "We can observe that labels like stem, 3.CPL (third person completive) or CTRF (counterfactual) were correctly labeled most of the time, as they were systematic and very frequent. Items like PRT (particle) had lower frequency, a lower recall and lower precision. The lower recall could be attributed to the fact that PRT is not systematic, i.e. multiple forms can take the same label. Therefore, it is more difficult to discriminate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "PRAG (pragmatic mark) appears only in verbs, and always in the same position (at the end of the word), this probably made this mark more easy to discriminate, thus, more easy to predict by the model. It is interesting that this morpheme was relatively frequent but it did not bear semantic information as it only provided discursive nuances (it can be translated as the filler word 'well').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "The 3.ICP (third person incompletive) label represents an aspect morpheme which is used very often since it is applied in the present tense and habitual situations. It always appears before the verb and in the same position, it seemed easier to predict. Therefore, this label has a high precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "The 3.PLS (third person pluscuamperfect) label also shows a systematic use before the verb; however, the latter did have a lower frequency on the corpus, what seems to have caused a lower recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "Otomi has two determiner morphemes: one for singular number (DET) and one for plural number (DET.PL). The one for the plural is clearly distinguished from other morphemes as it has the form y\u012b . However, for the singular number, the form is r\u012b which is the same as the form for the third person possessive (3.PSS). We believe that this fact made the label 3.PSS more prone to be incorrectly labeled (it showed a lower precision). In some cases, the model tended to incorrectly label the form r\u012b by preferring the most frequent label DET. This could explain the lower accuracy of 3.PSS compared to DET.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "In general, productive affixes were correctly labeled by our automatic system. This may represent a significant advantage in terms of aiding linguistic manual annotation. Productive and frequent morphemes may represent a repetitive annotation task that can be easily substituted by an automatic glossing system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "Even in the understanding that the glossing system is not 100% accurate, it is probably easier for a human annotator to correct problematic mislabels than to do all the process from scratch. In this sense, automatic glossing can simplify the task of manually glossing, and, therefore, it can help in the process of language documentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic perspective",
"sec_num": "5.2"
},
{
"text": "We focused on the task of automatic glossing for Otomi of Toluca, an indigenous language with complex morphological phenomena. We faced a lowresource scenario where we had to digitize, normalize and annotate a corpus available for this language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We applied a CRF based labeler with different variations in regard to the features that were taken into account by the model. Moreover, we included other sequential models, a HMM (baseline) and two RNN models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "CRFs outperfomed the baseline (HMM) but also the RNN models (Vanilla RNN and biLSTM). The CRF setting that took into account more information (encoded by the feature function) had the best performance. We also noticed that excluding POS tags do not seem to harm the system's performance that much. This could be an advantage since automatic POS tagging is a resource not always available for under resourced languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Furthermore, we provided a linguistically moti-vated insight of which labels were easier to predict by our system. Our automatic glossing labeler was able to achieve an accuracy of 96.2% (and 94.8% without POS tags). This sounds promising for reducing the workload when manually glossing. This can represent a middle step not only for strengthen language documentation but also for facilitating the creation of language technologies that can be useful for the speakers of Otomi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The following are the detailed results of the three different settings for CRF models. We report average accuracy score. The prefixes in the model names mean whether regularization terms L 1 and/or L 2 were configured.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "For example, the prefix reg means that both terms were present and conversely noreg means that no term is considered. Finally, l1_zero and l2_zero means if L 1 or L 2 term is equal to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "The variation of regularization parameters probed slight improvements between models of the same setting as can be showed in tables 7, 8 and 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "Accuracy CRF HM M Like _l2_zero 0.8800 CRF HM M Like _reg 0.8760 CRF HM M Like _noreg 0.8710 CRF HM M Like _l1_zero 0.8707 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "CRF P OSLess _reg 0.9482 CRF P OSLess _l2_zero 0.9472 CRF P OSLess _l1_zero 0.9442 CRF P OSLess _noreg 0.9407 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": null
},
{
"text": "An Otomi language spoken in the region of San Andr\u00e9s Cuexcontitl\u00e1n, Toluca, State of Mexico. Usually regarded as ots (iso639).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The digitized corpus, without any type of annotation, can be consulted in https://tsunkua.elotl.mx/.3 The manual glossing of this corpus was part of a linguistics PhD dissertation (Mijangos, 2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used Natural Language Toolkit (NLTK) for the HMM model.5 The code is available on https://github.com/ umoqnier/otomi-morph-segmenter/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The bias feature captures the proportion of a given label in the training set, i.e., it is a way to express that some labels are rare while others not.7 The maximum number of iterations in all cases was 50. 8 Both RNN models were trained in similar environments: 150 iterations, with a learning rate of 0.1 and Stochastic Gradient Descent (SGD) as optimization method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The code for the neural-based models is available on https://github.com/VMijangos/Glosado_ neuronal10 We took this decision due to computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their valuable comments. We also thank the members of Comunidad Elotl for supporting this work. This work has been partially supported by the SNSF grant no. 176305",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphological organization: The low conditional entropy conjecture. Language",
"authors": [
{
"first": "Farrell",
"middle": [],
"last": "Ackerman",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "429--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farrell Ackerman and Robert Malouf. 2013. Morpho- logical organization: The low conditional entropy conjecture. Language, pages 429-464.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Part-of-speech tagging on an endangered language: a parallel Griko-Italian resource",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Marika",
"middle": [],
"last": "Lekakou",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Quer",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Zimianiti",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Debenedetto",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2529--2539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos, Marika Lekakou, Josep Quer, Eleni Zimianiti, Justin DeBenedetto, and David Chiang. 2018. Part-of-speech tagging on an endangered language: a parallel Griko-Italian re- source. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2529-2539, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inflectional class complexity in the otomanguean languages",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Baerman",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Palancar",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Feist",
"suffix": ""
}
],
"year": 2019,
"venue": "Amerindia",
"volume": "41",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Baerman, Enrique Palancar, and Timothy Feist. 2019. Inflectional class complexity in the oto- manguean languages. Amerindia, 41:1-18.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Otom\u00edes del Estado de M\u00e9xico. Comisi\u00f3n Nacional para el Desarrollo de los Pueblos Ind\u00edgenas",
"authors": [
{
"first": "Guadalupe",
"middle": [],
"last": "Barrientos L\u00f3pez",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guadalupe Barrientos L\u00f3pez. 2004. Otom\u00edes del Es- tado de M\u00e9xico. Comisi\u00f3n Nacional para el Desar- rollo de los Pueblos Ind\u00edgenas.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical inference for probabilistic functions of finite state markov chains. The annals of mathematical statistics",
"authors": [
{
"first": "E",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Baum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Petrie",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "37",
"issue": "",
"pages": "1554--1563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard E Baum and Ted Petrie. 1966. Statistical inference for probabilistic functions of finite state markov chains. The annals of mathematical statis- tics, 37(6):1554-1563.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The leipzig glossing rules: Conventions for interlinear morpheme-by-morpheme glosses",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Comrie",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
},
{
"first": "Balthasar",
"middle": [],
"last": "Bickel",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Comrie, Martin Haspelmath, and Balthasar Bickel. 2008. The leipzig glossing rules: Con- ventions for interlinear morpheme-by-morpheme glosses. Department of Linguistics of the Max Planck Institute for Evolutionary Anthropology & the Department of Linguistics of the University of Leipzig. Retrieved January, 28:2010.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The SIGMORPHON 2016 shared Task-Morphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task- Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphol- ogy, pages 10-22, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finding structure in time",
"authors": [
{
"first": "",
"middle": [],
"last": "Jeffrey L Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive science",
"volume": "14",
"issue": "",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L Elman. 1990. Finding structure in time. Cog- nitive science, 14(2):179-211.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The viterbi algorithm. Proceedings of the IEEE",
"authors": [
{
"first": "David",
"middle": [],
"last": "Forney",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "61",
"issue": "",
"pages": "268--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G David Forney. 1973. The viterbi algorithm. Proceed- ings of the IEEE, 61(3):268-278.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Njaua nt'ot'i ra h\u00f1\u00e4h\u00f1u. Norma de escritura de la lengua h\u00f1\u00e4h\u00f1u (Otom\u00ed)",
"authors": [
{
"first": "",
"middle": [],
"last": "Inali",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "INALI. 2014. Njaua nt'ot'i ra h\u00f1\u00e4h\u00f1u. Norma de es- critura de la lengua h\u00f1\u00e4h\u00f1u (Otom\u00ed). INALI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural transductive learning and beyond: Morphological generation in the minimal-resource setting",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3254--3264",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1363"
]
},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2018. Neural transductive learning and beyond: Morphological generation in the minimal-resource setting. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3254- 3264, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "El otom\u00ed de Toluca. Instituto de Investigaciones Antropol\u00f3gicas",
"authors": [
{
"first": "Yolanda",
"middle": [],
"last": "Lastra",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yolanda Lastra. 1992. El otom\u00ed de Toluca. Instituto de Investigaciones Antropol\u00f3gicas, UNAM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unidad y Diversidad de la Lengua: Relatos otom\u00edes",
"authors": [
{
"first": "Yolanda",
"middle": [],
"last": "Lastra",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yolanda Lastra. 2001. Unidad y Diversidad de la Lengua: Relatos otom\u00edes. UNAM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Challenges of language technologies for the indigenous languages of the Americas",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [],
"last": "Sierra",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza-Ruiz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "55--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 55-69, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An\u00e1lisis de la flexi\u00f3n verbal del espa\u00f1ol y del otom\u00ed de Toluca a partir de un modelo implicacional de palabra y paradigma",
"authors": [],
"year": null,
"venue": "Instituto de Investigaciones Filol\u00f3gicas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00edctor Mijangos. 2021. An\u00e1lisis de la flexi\u00f3n verbal del espa\u00f1ol y del otom\u00ed de Toluca a partir de un modelo implicacional de palabra y paradigma. Ph.D. the- sis, Instituto de Investigaciones Filol\u00f3gicas, UNAM, Mexico City.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic glossing in a low-resource setting for language documentation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Moeller",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages",
"volume": "",
"issue": "",
"pages": "84--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Moeller and Mans Hulden. 2018. Automatic glossing in a low-resource setting for language docu- mentation. In Proceedings of the Workshop on Com- putational Modeling of Polysynthetic Languages, pages 84-93.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Igt2p: From interlinear glossed texts to paradigms",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Moeller",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Changbing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5251--5262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Moeller, Ling Liu, Changbing Yang, Katharina Kann, and Mans Hulden. 2020. Igt2p: From interlin- ear glossed texts to paradigms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5251-5262.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Crfsuite: a fast implementation of conditional random fields (crfs)",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoaki Okazaki. 2007. Crfsuite: a fast implementation of conditional random fields (crfs).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A tutorial on hidden markov models and selected applications in speech recognition",
"authors": [
{
"first": "",
"middle": [],
"last": "Lawrence R Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "2",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257- 286.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic interlinear glossing as two-level sequence classification",
"authors": [
{
"first": "Tanja",
"middle": [],
"last": "Samardzic",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schikowski",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Stoll",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH)",
"volume": "",
"issue": "",
"pages": "68--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanja Samardzic, Robert Schikowski, and Sabine Stoll. 2015. Automatic interlinear glossing as two-level sequence classification. In Proceedings of the 9th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH), pages 68-72.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Understanding machine learning: From theory to algorithms",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shai Shalev-Shwartz and Shai Ben-David. 2014. Un- derstanding machine learning: From theory to algo- rithms. Cambridge university press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Modeling the noun morphology of plains cree",
"authors": [
{
"first": "Conor",
"middle": [],
"last": "Snoek",
"suffix": ""
},
{
"first": "Dorothy",
"middle": [],
"last": "Thunder",
"suffix": ""
},
{
"first": "Kaidi",
"middle": [],
"last": "Loo",
"suffix": ""
},
{
"first": "Antti",
"middle": [],
"last": "Arppe",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Lachler",
"suffix": ""
},
{
"first": "Sjur",
"middle": [],
"last": "Moshagen",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Trosterud",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Workshop on the Use of Computational Methods in the Study of Endangered Languages",
"volume": "",
"issue": "",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conor Snoek, Dorothy Thunder, Kaidi Loo, Antti Arppe, Jordan Lachler, Sjur Moshagen, and Trond Trosterud. 2014. Modeling the noun morphology of plains cree. In Proceedings of the 2014 Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 34-42.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Statistical Learning Theory",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Vapnik. 1998. Statistical Learning Theory. John Wiley & Sons.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic interlinear glossing for under-resourced languages leveraging translations",
"authors": [
{
"first": "Xingyuan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Satoru",
"middle": [],
"last": "Ozaki",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5397--5408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingyuan Zhao, Satoru Ozaki, Antonios Anastasopou- los, Graham Neubig, and Lori Levin. 2020. Auto- matic interlinear glossing for under-resourced lan- guages leveraging translations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5397-5408.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "General information about the Otomi corpus"
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "More frequent POS tags and gloss in corpus"
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Results for the different experimental setups"
},
"TABREF8": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Results from the CRF linear model on a subset of the glossing labels"
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "CRF HM M Like setting results"
},
"TABREF10": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "CRF P OSLess setting results Accuracy CRF linear _reg 0.9624 CRF linear _l2_zero 0.9598 CRF linear _l1_zero 0.9586 CRF linear _noreg 0.9586"
},
"TABREF11": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "CRF linear setting results"
}
}
}
}