{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:10.745125Z" }, "title": "Leveraging English Word Embeddings for Semi-Automatic Semantic Classification in N\u00eahiyaw\u00eawin (Plains Cree)", "authors": [ { "first": "Atticus", "middle": [ "G" ], "last": "Harrigan", "suffix": "", "affiliation": {}, "email": "atticus.harrigan@ualberta.ca" }, { "first": "Antti", "middle": [], "last": "Arppe", "suffix": "", "affiliation": {}, "email": "arppe@ualberta.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper details a semi-automatic method of word clustering for the Algonquian language, N\u00eahiyaw\u00eawin (Plains Cree). Although this method worked well, particularly for nouns, it required some amount of manual postprocessing. The main benefit of this approach over implementing an existing classification ontology is that this method approaches the language from an endogenous point of view, while performing classification quicker than in a fully manual context. 1 There is one attempt at semantically classifying N\u00eahiyaw\u00eawin through automatic means found in Dacanay et al. (2021). This work makes use of similar techniques as desccribed in this paper, differing mainly in its mapping of N\u00eahiyaw\u00eawin words onto Wordnet classes.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper details a semi-automatic method of word clustering for the Algonquian language, N\u00eahiyaw\u00eawin (Plains Cree). Although this method worked well, particularly for nouns, it required some amount of manual postprocessing. The main benefit of this approach over implementing an existing classification ontology is that this method approaches the language from an endogenous point of view, while performing classification quicker than in a fully manual context. 1 There is one attempt at semantically classifying N\u00eahiyaw\u00eawin through automatic means found in Dacanay et al. (2021). This work makes use of similar techniques as desccribed in this paper, differing mainly in its mapping of N\u00eahiyaw\u00eawin words onto Wordnet classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Grouping words into semantic subclasses within a part of speech is a technique used widely throughout quantitative and predictive studies in the field of linguistics. Bresnan et al. (2007) use high level verb classes to predict the English dative alternation, Arppe et al. (2008) uses verb class as one of the feature sets to help predict the alternation of Finnish think verbs, and Yu et al. (2017) use polarity classifications (good vs bad) from pre-defined lexica such as WordNet (Miller, 1998) . In many cases, classifications within word classes allow researchers to group words into smaller cohesive groups to allow for use as predictors in modelling. Rather than using thousands individual lexemes as predictors, one can use a word's class to generalize over the semantic features of individual lexemes to allow for significantly more statistical power.", "cite_spans": [ { "start": 167, "end": 188, "text": "Bresnan et al. (2007)", "ref_id": "BIBREF3" }, { "start": 260, "end": 279, "text": "Arppe et al. (2008)", "ref_id": "BIBREF1" }, { "start": 383, "end": 399, "text": "Yu et al. 
(2017)", "ref_id": "BIBREF22" }, { "start": 483, "end": 497, "text": "(Miller, 1998)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While extensive ontologies of word classifications exist for majority languages like English (Miller, 1998) , German (Hamp and Feldweg, 1997) , and Chinese (Wang and Bond, 2013) , minority languages, especially lesser resourced languages in North America generally do not boast such resources. 1 Where such ontologies do exist, for ex-ample in Innu-aimun (Eastern Cree) (Visitor et al., 2013) , they are often manually created, an expensive process in terms of time. Alternatively, they may be based upon English ontologies such as WordNet. This opens the window to near-automatic ontology creation by associating definitions in a target language and English through a variety of methods. This is especially important, given the amount of time and effort that goes into manually classifying a lexicon through either an existing ontology (be it something like Rapidwords 2 or even Levin's like classes (Levin, 1993) ). Moreover, there is a motivation based in understanding a language and its lexicalization process on its own terms, though how to do this with a lesser resourced language remains unclear.", "cite_spans": [ { "start": 93, "end": 107, "text": "(Miller, 1998)", "ref_id": "BIBREF15" }, { "start": 117, "end": 141, "text": "(Hamp and Feldweg, 1997)", "ref_id": "BIBREF7" }, { "start": 156, "end": 177, "text": "(Wang and Bond, 2013)", "ref_id": "BIBREF19" }, { "start": 294, "end": 295, "text": "1", "ref_id": null }, { "start": 370, "end": 392, "text": "(Visitor et al., 2013)", "ref_id": "BIBREF18" }, { "start": 901, "end": 914, "text": "(Levin, 1993)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begun word classification in preparation for modelling a morpho-syntactic alternation in N\u00eahiyaw\u00eawin verbs. One hypothesis we developed for this alternation, based on Arppe et al. (2008) , is that the semantic classes of the verbs themselves as well as their nominal arguments would inform the verbal alternation. Due to constraints of time, we investigated methods to automatically classify both verbs and nouns in N\u00eahiyaw\u00eawin. Although statistical modelling remains the immediate motivator for the authors, semantic/thematic classifications have a wide range of benefits for language learners and revitalization, particularly in online lexicographic resources, where one may want to view all words to do with a theme, rather than simply finding translations of single English words.", "cite_spans": [ { "start": 170, "end": 189, "text": "Arppe et al. (2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "In creating a framework for automatic semantic classification we make use of Word2vec (Mikolov et al., 2013a) word embeddings. Word embeddings are words represented by n-dimensional vectors. These vectors are ultimately derived from a word's context in some corpus through the Word2vec algorithm. Unfortunately, the Word2vec method is sensitive to corpus size. We initially attempted to create basic word and feature co-occurrence matrices based on a 140,000 token N\u00eahiyaw\u00eawin corpus (Arppe et al., 2020) to create word vectors using Principal Components Analysis, but in the end found the results to be not practically useful. 
Similarly, an attempt at both tf-idf and Word2Vec using only the N\u00eahiyaw\u00eawin dictionary produced mostly ill-formed groupings, though in these cases preprocessing by splitting verbs and nouns was not performed. Regardless, the poor performance was almost certainly due to the paucity of data. Although the available corpora are small, N\u00eahiyaw\u00eawin does have several English-to-N\u00eahiyaw\u00eawin dictionaries, the largest being Wolvengrey (2001) . Although a bilingual N\u00eahiyaw\u00eawin-English dictionary, it is one formed from an Indigenous point of view, based on vocabulary from previous dictionaries, some of which have been compiled by N\u00eahiyaw\u00eawin communities from their own perspectives, or gleaned from a number of text collections, rather than attempting to find N\u00eahiyaw\u00eawin word matches for a pre-defined set of English words. This results in dictionary entries such as sakapw\u00eaw: it roasts over a fire (by hanging, with string on stick). Definitions such as this take into account the nuanced cultural understanding reflected in the word's morphology.", "cite_spans": [ { "start": 86, "end": 109, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF13" }, { "start": 484, "end": 504, "text": "(Arppe et al., 2020)", "ref_id": "BIBREF0" }, { "start": 1052, "end": 1069, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "To address the issue of corpus size, we attempted to bootstrap our classification scheme with pre-trained English vectors from the Google News Corpus, which represents each of its 3 million words and phrases with a 300-dimensional vector. 3 We make use of the English definitions (sometimes also referred to as glosses) provided in Wolvengrey (2001) and fit to each word its respective Google News Corpus vector. This dictionary makes use of lemmas as headwords, and contains 21,717 entries. The presumption is that the real-world referents (at least in terms of denotation) of English and N\u00eahiyaw\u00eawin words are approximately comparable, in particular when taking the entire set of words in an English definition. Stop words were removed, and where content words were present in definitions in Wolvengrey (2001) but not available in the Google News Corpus, synonyms were used (one such example might be the word mit\u00eawin, which is unavailable in the corpus and thus would be replaced with something like medicine lodge, or deleted if a synonym was already given in the definition). Because the Google News Corpus uses American spelling, while Wolvengrey (2001) uses Canadian spelling, American forms (e.g. color, gray) were converted into Canadian forms (e.g. colour, grey). If such preprocessing is not performed, these words are simply unavailable for clustering, as they lack a matching vector. 4 Where a N\u00eahiyaw\u00eawin word had more than one word sense, each sense was given a separate entry, and the second entry was marked with a unique identifier. 
Finally, where needed, words in the N\u00eahiyaw\u00eawin definitions were lemmatized.", "cite_spans": [ { "start": 234, "end": 235, "text": "3", "ref_id": null }, { "start": 327, "end": 344, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" }, { "start": 789, "end": 806, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" }, { "start": 1140, "end": 1157, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" }, { "start": 1402, "end": 1403, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Once every word in the Wolvengrey (2001) definitions matched an entry in the Google News Corpus, we associated each word in a N\u00eahiyaw\u00eawin definition with its respective Google News vector. That is, given a definition such as aw\u00e2sisihk\u00e2nis: small doll, the resulting structure would be:", "cite_spans": [ { "start": 19, "end": 36, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "aw\u00e2sisihk\u00e2nis = [0.159, 0.096, \u22120.125, ...] [0.108, 0.031, \u22120.034, ...]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Because all word-vectors in the Google News Corpus are of the same dimensionality, we then took the resulting definition and averaged, per dimension, the values of all its constituent word-vectors. This produced a single 300-dimensional vector that acts as a sort of naive sentence vector for each of the English glosses/definitions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "aw\u00e2sisihk\u00e2nis = [0.134, 0.064, \u22120.080, ...]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Mikolov et al. (2013b) mention this sort of naive representation and suggest the use of phrase vectors instead of word vectors to address the representation of non-compositional idioms; however, given the way Wolvengrey's (2001) definitions are written (e.g. with few idiomatic or metaphorical constructions), and for reasons of computational simplicity, we opted to use the above naive implementation in this paper. After creating the sentence (or English definition) vectors, we proceeded to cluster definitions with similar vectors together. To achieve this, we created a Euclidean distance matrix from the sentence vectors and made use of the hclust function in R (R Core Team, 2017) to perform hierarchical agglomerative clustering using the Ward method, based on prior experience in using this method to produce multiple levels of smaller, spherical clusters (Arppe et al., 2008) . This form of clustering is essentially a bottom-up approach: groupings are made by first joining the individual labels separated by the shortest distance, then iteratively, at each higher level, making use of the clusters that result from the previous step or the remaining individual labels; this second step is repeated until there is a single cluster containing all labels. 
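To illustrate the pipeline just described, here is a minimal R sketch under stated assumptions: the embeddings matrix is a small random stand-in for the pre-trained 300-dimensional Google News vectors, the definitions are toy entries (with diacritics omitted from the names), and all identifiers are hypothetical rather than the actual implementation.

```r
# Stand-in for the pre-trained 300-dimensional word vectors (one row
# per English word); in practice these would be the Google News vectors.
set.seed(1)
vocab <- c("small", "doll", "walk", "stroll", "sidewalk",
           "roast", "fire", "hang")
embeddings <- matrix(rnorm(length(vocab) * 300), nrow = length(vocab),
                     dimnames = list(vocab, NULL))

# English definition words per headword, already stop-word-filtered,
# synonym-substituted, and lemmatized (toy entries).
definitions <- list(
  awasisihkanis = c("small", "doll"),
  pimohtewin    = c("walk", "stroll", "sidewalk"),
  sakapwew      = c("roast", "fire", "hang")
)

# Naive "sentence" vector: the per-dimension mean of the word vectors.
defn_vectors <- t(sapply(definitions, function(words) {
  colMeans(embeddings[words, , drop = FALSE])
}))

# Euclidean distances plus hierarchical agglomerative clustering with
# the Ward method (ward.D2 is the squared-distance variant of Ward).
d   <- dist(defn_vectors, method = "euclidean")
fit <- hclust(d, method = "ward.D2")

# The tree can be cut into any chosen number of clusters after the fact.
clusters <- cutree(fit, k = 2)
```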
This method of clustering creates a cluster tree that can be cut at any specified level after the analysis has been completed, to select different numbers of clusters, allowing researchers some degree of flexibility without needing to rerun the clustering. This method is very similar to what has been done by Arppe et al. (2008) , Bresnan et al. (2007) , and Divjak and Gries (2006) . The choice of the number of clusters was made based on an evaluation of the effectiveness of the clusters, via an impressionistic overview by the authors.", "cite_spans": [ { "start": 210, "end": 227, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" }, { "start": 788, "end": 808, "text": "(Arppe et al., 2008)", "ref_id": "BIBREF1" }, { "start": 1564, "end": 1583, "text": "Arppe et al. (2008)", "ref_id": "BIBREF1" }, { "start": 1586, "end": 1607, "text": "Bresnan et al. (2007)", "ref_id": "BIBREF3" }, { "start": 1614, "end": 1637, "text": "Divjak and Gries (2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "For our purposes, we focused on the semantic classification of N\u00eahiyaw\u00eawin nouns and verbs. N\u00eahiyaw\u00eawin verbs are naturally morphosemantically divided into four separate classes: intransitive verbs with a single inanimate argument (VII), intransitive verbs with a single animate argument (VAI), transitive verbs with an animate actor 5 and an inanimate goal (VTI), and transitive verbs with animate actors and goals (VTA). For verbs, clustering took place within each of these proto-classes. Among the VIIs, 10 classes proved optimal; the VAIs had 25 classes, the VTIs 15, and the VTAs 20. The choice to preprocess verbs into these four classes was made because not doing so resulted in a clustering pattern that focused mainly on differences in transitivity and the animacy of arguments. With any more or fewer classes, the HAC clusters were far less cohesive, with obvious semantic units dispersed among many classes or split into multiple classes with no obvious differentiation. Similarly, verbs were split from nouns in this process because definitions in Wolvengrey (2001) vary significantly between verbs and nouns.", "cite_spans": [ { "start": 1061, "end": 1078, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Nouns are naturally divided into two main classes in N\u00eahiyaw\u00eawin: animate and inanimate. 6 For our purposes we divide these further, within each class, between independent (i.e. alienable) and dependent (i.e. inalienable) nouns, to create four main classes: Independent Animate Nouns (NA), Dependent Animate Nouns (NDA), Independent Inanimate Nouns (NI), and Dependent Inanimate Nouns (NDI). The reason for this further division is the morphosemantic difference between independent and dependent nouns in N\u00eahiyaw\u00eawin: while independent nouns can stand on their own and represent a variety of entities, dependent nouns are semantically and morphologically dependent on some possessor. We opted to pre-split NDIs and NDAs into their own classes, so as not to have the clustering focus on alienability as the primary difference. 7", "cite_spans": [ { "start": 89, "end": 90, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "In all cases, clusters produced by this procedure needed some amount of post-processing. 
For nouns, this post-processing was minimal and mostly took the form of adjustments to the produced clusters: moving some items from one class to another, splitting a class that had clear semantic divisions, etc. For the verbs, this processing was often more complex, especially for the VAI and VTA classes. Whether an item belonged in one class or another was determined based on the central meaning of the action or entity. If the majority of group members pertained to smoking (a cigarette), a word describing smoking meat (as food preparation) would not be placed in this group, as the essence of the action and its intended purpose diverged significantly from the rest of the group. 6 Although this gender dichotomy is mostly semantically motivated (e.g. nouns that are semantically inanimate are part of the inanimate gender), this is not always the case, as with the word pahkw\u00easikan, 'bread', a grammatically animate word.", "cite_spans": [ { "start": 775, "end": 776, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "7 Preliminary results for words not separated by their conjugation class or declension did, in fact, create clusters based around these obvious differences. This is likely due to the way definitions were phrased (e.g. dependent nouns would have a possessive determiner or pronoun).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Although most clusters produced somewhat cohesive semantic units, the largest clusters for the VAI and VTA classes acted as, essentially, catch-all clusters. Although computationally they seemed to have similar vector semantics, the relationship between items was not obvious to the human eye. Postprocessing for these clusters took substantial amounts of time and essentially consisted of using the more cohesive clusters as a scaffold into which words from these catch-all clusters were fitted. In most cases, this resulted in slightly more clusters after postprocessing, though for the VAIs this number was significantly higher, and for the NDIs it was slightly lower. Table 1 lists the number of clusters directly from HAC and after postprocessing.", "cite_spans": [], "ref_spans": [ { "start": 653, "end": 660, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Postprocessing grouped together words based on the core semantic property of the word class: nouns were generally grouped based on the entity or state they represented, and verbs were generally grouped based on the most basic form of the action they represented. This is why, for example, AI-cover includes words for both covering and uncovering. In some cases a final class may seem like something that could be subsumed under another (e.g. AI-pray or AI-cooking might be understood as subsets of AI-action); however, in these cases, the subsumed class was judged to be sufficiently separate (e.g. cooking is an action of transforming resources into food for the purposes of nourishment, while verbs of AI-action are more manipulative, direct actions done for their own sake). Moreover, the automatic classification already grouped words in these ways, further justifying their separation. Finally, some groupings seem more morphosyntactic (e.g. 
AI-reflexive), though we argue that reflexivity, performing an action inwards, is in and of itself a salient semantic feature, and the inclusion of these terms in Wolvengrey (2001) indicates their lexicalization and distinction from the non-reflexive forms.", "cite_spans": [ { "start": 1107, "end": 1124, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The actual quality of clustering varied from class to class. In general, nouns resulted in much more cohesive clusters out of the box and required far less postprocessing. For example, nearly all items in HAC cluster NI14 referred to parts of human bodies (and those that did not fit this description were terms clearly related to body parts, like aspat\u00e2skwahpisowin, 'back rest'), while NI13 was made up of trapping/hunting words and words for nests/animals. The NA classes produced through HAC were similarly straightforward: NA9 was made up of words for trees, poles, sticks, and plants; NA8 was made up entirely of words for beasts of burden, carts, wheels, etc.; while much of NA3 and NA7, and nearly all of NA2, referred to other animals. Once manually postprocessed, the NA lexemes settled into 8 classes: NA-persons, NA-beast-of-burden, NA-food, NA-celestial, NA-body-part, NA-religion, NA-money/count, and NA-shield.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The NDI and NDA classes required almost no postprocessing: NDA1 and NDA3 were each made up of words for various family and non-family relationships, while NDA2 was made up of words for body parts and clothing. The resulting classes for these were: NDA-Relations, NDA-Body, and NDA-Clothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The NDI lexemes fell essentially into two groups: the vast majority of NDI forms referred to bodies and body parts, while two lexemes referred to the concept of a house, resulting in only two classes: NDI-body and NDI-house.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Verbs, on the other hand, required a good deal more postprocessing. The VIIs showed the best clustering results without postprocessing. For example, VII6 was entirely made up of taste/smell lexemes, VII7 was almost entirely weather-related, VII8 contained verbs that only take plural subjects, VII9 had only lexemes referring to sound and sight, and VII10 had only nominal-like verbs (e.g. m\u00eesiy\u00e2pisk\u00e2w '(it is) rust(y)'). Despite these well-formed clusters, VII1 through VII5 were less cohesive and required manual clustering. In the end, the following distinct classes were identified: II-natural-land, II-weather-time, II-sensory-attitude, II-plural, II-move, II-time, and II-named. 8 Although postprocessing was required, it was not too substantial in scope or time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The VAIs required significantly more work. Some classes were well defined, such as VAI23, whose members all described some sort of flight, but VAI12 contained verbs of expectoration, singing, dancing, and even painting. 
The HAC classes were consolidated into 13 classes: AI-state, AI-action, AI-reflexive, AI-cooking, AI-speech, AI-collective, AI-care, AI-heat/fire, AI-money/count, AI-pray, AI-childcare, AI-canine, and AI-cover.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The VTIs similarly required manual postprocessing after HAC clustering. Although some classes, such as VTI11 (entirely to do with cutting or breaking) or VTI14 (entirely to do with pulling), were very well formed, the majority of the classes needed further subdivision (though significantly less so than with the VAIs), resulting in the following 6 classes: TI-action, TI-nonaction, TI-speech, TI-money/counter, TI-fit, and TI-food.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Finally, the VTAs required a similar amount of postprocessing as the VAIs. Although a few classes were well formed (such as VTA4, which was entirely made up of verbs for 'causing' something), the vast majority of HAC classes contained two or more clear semantic groupings. Through manual postprocessing, the following set of classes was defined: VTA_allow, VTA_alter, VTA_body-position, VTA_care-for, VTA_cause, VTA_clothes, VTA_cognition, VTA_create, VTA_deceive, VTA_do, VTA_existential, VTA_food, VTA_hunt, VTA_miss/err, VTA_money, VTA_move, VTA_play, VTA_restrain, VTA_religious, VTA_seek, VTA_sense, VTA_speech, VTA_teach, VTA_tire, VTA_treat-a-way, VTA_(un)cover.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "In addition to the above evaluation, i.e. the description of the manual scrutiny and adjustment of the HAC results, which is in and of itself an evaluation of the technique presented in this paper (with single-subject experimentation proposed as a rapid path to data for less-resourced languages such as Vietnamese (Pham and Baayen, 2015) ), we present a preliminary quantitative evaluation of this technique. This evaluation allows us to judge how useful these classes are in practical terms, providing an indirect measure of the informational value of the clusters. We make use of the mixed-effects modelling that initially motivated automatic semantic clustering, focusing on a morphological alternation called N\u00eahiyaw\u00eawin Order, wherein a verb may take the form ninip\u00e2n (the Independent) or \u00ea-nip\u00e2y\u00e2n (the \u00ea-Conjunct), both of which may be translated as 'I sleep.' The exact details of this alternation remain unclear, though there appears to be some syntactic and pragmatic motivation (Cook, 2014) . Using R (R Core Team, 2017) and the lme4 package (Bates et al., 2015) , we ran a logistic regression to predict the alternation using verbal semantic classes as categorical variables. In order to isolate the effect of semantic class, no other effects were used, and the semantic classes were included as random effects. To assess the effectiveness of semantic class in this context, we use the pseudo-R2 value, a measure of goodness of fit. Unlike a regular R2 measure, the pseudo-R2 cannot be interpreted as a direct measure of how much variance a model explains, and generally \"good\" pseudo-R2 values are comparatively smaller (McFadden et al., 1973) , though a higher value still represents a better fit. As a general rule, a pseudo-R2 of 0.20 to 0.40 represents a well-fitting model (McFadden, 1977). 9 
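As a concrete illustration of this evaluation, here is a minimal R sketch assuming a toy data frame in place of the actual corpus observations; the variable names, class labels, and exact model specification are hypothetical stand-ins, not the models reported below.

```r
library(lme4)

# Toy stand-in for the corpus data: one row per verb token, with the
# observed Order (1 = Independent, 0 = e-Conjunct) and the verb's
# semantic class (hypothetical labels).
set.seed(1)
orders <- data.frame(
  order     = rbinom(500, 1, 0.4),
  sem_class = sample(c("AI-state", "AI-action", "AI-speech"), 500,
                     replace = TRUE)
)

# Logistic model with semantic class as the sole, random effect.
m_full <- glmer(order ~ 1 + (1 | sem_class), data = orders,
                family = binomial)

# McFadden's pseudo-R2: 1 - logLik(full) / logLik(intercept-only null).
m_null    <- glm(order ~ 1, data = orders, family = binomial)
pseudo_r2 <- as.numeric(1 - logLik(m_full) / logLik(m_null))
```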
Models were fit for each of the four conjugation classes, both for the classes produced directly from the Hierarchical Agglomerative Clustering and for those manually adjusted. We used a subset of the Ahenakew-Wolfart Corpus (Arppe et al., 2020) , containing 10,764 verb tokens observed in either the Independent or \u00ea-Conjunct forms. The resulting pseudo-R2 scores represent the extent to which the automatic and semi-manual clusters can explain the N\u00eahiyaw\u00eawin Order alternation. Table 2 presents the results of these analyses. The Manual column represents clusters that were manually adjusted, while the HAC-Only column represents the result of the logistic model that used only the fully automatic HAC-produced clusters. The manually adjusted and HAC-only classes performed similarly, especially for the VTAs, though manual adjustment had a slightly worse fit for the VIIs, and, conversely, the VAIs and VTIs had noticeably better fits using the manually adjusted classes. Although it appears that manual adjustment produced classes that were somewhat better able to explain this alternation, both manually adjusted and HAC-only clusters appear to explain a non-negligible degree of this alternation phenomenon in the above models. This is significant because it shows that the clustering techniques presented in this paper produce a tangible and useful product for linguistic analysis. Further, it suggests that, although manual classification was sometimes more useful, the automatic classes performed more or less as well, allowing researchers to decide whether the added effort is worth the small increase in informational value. Nevertheless, alternative methods of evaluation, such as evaluating clusters based on speaker input, particularly through visual means as described in Majewska et al. (2020) , should be considered. 10 9 One can also compare the results in this paper with results from a similar alternation study in Arppe et al. (2008) .", "cite_spans": [ { "start": 304, "end": 327, "text": "(Pham and Baayen, 2015)", "ref_id": "BIBREF16" }, { "start": 979, "end": 991, "text": "(Cook, 2014)", "ref_id": "BIBREF4" }, { "start": 1043, "end": 1063, "text": "(Bates et al., 2015)", "ref_id": "BIBREF2" }, { "start": 1622, "end": 1645, "text": "(McFadden et al., 1973)", "ref_id": "BIBREF12" }, { "start": 2006, "end": 2026, "text": "(Arppe et al., 2020)", "ref_id": "BIBREF0" }, { "start": 3578, "end": 3600, "text": "Majewska et al. (2020)", "ref_id": "BIBREF10" }, { "start": 3724, "end": 3743, "text": "Arppe et al. (2008)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 2256, "end": 2263, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.1" }, { "text": "10 It is worth noting that previous attempts at such experimentation via N\u00eahiyaw\u00eawin communities with which we have good relationships have been poorly received by speakers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.1" }, { "text": "In general, the best clustering was seen in classes with fewer items. The VAI and NI lexemes required the most postprocessing, with each having roughly double the number of items of the next most numerous verb/noun class. Verbs in general seemed to produce less cohesive classes through HAC. Although the exact cause of this discrepancy is unknown, it could perhaps be due to the way words are defined in Wolvengrey (2001) . In this dictionary, verb definitions almost always contain more words than noun definitions. 
Almost every single verb definition will have at least two words, owing to the fact that N\u00eahiyaw\u00eawin verbs are defined via an inflected form of the lexeme. This means that if one looks up a word like walk, it would appear as: pimoht\u00eaw: s/he walks, s/he walks along; s/he goes along. Meanwhile, nouns tend to have shorter definitions. The definition for the act of walking, a nominalized form of the verb for walk, is written as: pimoht\u00eawin: walk, stroll; sidewalk. This difference is exacerbated by the fact that definitions are often translated fairly literally. Something like p\u00eayakw\u00eayimisow might be translated simply as 's/he is selfish,' but contains morphemes meaning one, think, reflexive, and s/he. A gloss of this word is seen in (1). Rather than simply defining the word as 's/he is selfish,' Wolvengrey (2001) has opted to provide a more nuanced definition: p\u00eayakw\u00eayimisow: s/he thinks only of him/herself, s/he is selfish, s/he is self-centered.", "cite_spans": [ { "start": 412, "end": 429, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" }, { "start": 1315, "end": 1333, "text": "(Wolvengrey, 2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "(1) p\u00eayakw\u00eayimisow p\u00eayakw-\u00eayi-m-iso-w one-think-VTA-RFLX-3SG 's/he thinks only of him/herself' The result of this complex form of defining is that words are defined more in line with how they are understood within N\u00eahiyaw\u00eawin culture, which is indeed often manifested in the derivational morphological composition of these words. This is central to the motivation for this method of semi-automatic clustering, but it produces verbs with relatively long definitions. An alternative explanation for why N\u00eahiyaw\u00eawin lexemes with English definitions consisting of more numerous parts of speech were more difficult to classify is that these divisions simply have significantly more variation in meaning, for whatever reason. Further investigation into this is needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Also worth noting are the relative distributions of each of the postprocessed classes mentioned above. Table 3 details each of the postprocessed noun classes sorted by their size.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Perhaps unsurprisingly, the distribution of lexemes into the different classes followed a sort of Zipfian distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The NA-person and NA-other-animals classes accounted for the vast majority of animate noun lexemes. Just under half of all NI lexemes were nominalized verbs, and roughly a quarter were smaller object-like items (e.g. tools, dishes, etc.). The NDAs were almost entirely dominated by words for family, while all but three NDIs were body-part lexemes. Some categories, such as NI-scent, NI-days, and NA-shield, have extremely low membership counts, but were sufficiently different from the other categories that they were not grouped into another class. Most interestingly, there appeared to be three NI lexemes that referred to persons, something usually reserved for NAs only. 
These lexemes were okitaham\u00e2k\u00eaw 'one who forbids,' owiyasiw\u00eawikim\u00e2w 'magistrate,' and mihkokwayaw\u00eaw 'red neck.' In all three cases, the lexemes seem to be deverbal nouns (from kitaham\u00e2k\u00eaw 's/he forbids,' wiyasiw\u00eaw 's/he makes laws,' and mihkokwayaw\u00eaw 's/he has a red neck').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Verbs showed a similar distribution. Table 4 details the distribution of words within each of the semantic classes for verbs. With the exception of the VIIs and VAIs, verbs were dominated by classes for action, which subsumes most volitional actions (e.g. k\u00eeskihkw\u00eapisiw\u00eaw 's/he rips the face off of people,' k\u00e2s\u00eepayiw 's/he deletes'), and for non-action, which includes most verbs of thought, emotion, judgment, or sensory action (e.g. koskowih\u00eaw 's/he startles someone,' n\u00f4c\u00eehkaw\u00eaw 's/he seduces someone'). Other classes may include action verbs, such as AI-cooking and TI-speech. Although these verbs could be classified into one of the two previously mentioned classes, their automatic classification and shared semantics unify them in a way that distinguishes them from the other items in those larger classes.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Overall, verb forms, especially the most numerous classes of VAI and VTA, required a large degree of manual postprocessing. Because this approach assumes no underlying ontology, but rather attempts to work bottom-up (cf. Hanks (1996) ), the time taken to postprocess the VAI and VTA classes is likely not too far from what it would take to manually classify these words based on a prebuilt ontology; the appeal of a bottom-up classification should not be overlooked, however. As an example, many ontologies place concepts like thinking and being happy into separate classes; in our classification, however, these words were combined into a single class of cognition. This is because emotion words like m\u00f4cik\u00eayihtam, 's/he is happy (because of something)' (in addition to being verbs and not adjectives) contain a morpheme, {-\u00eayi-}, meaning 'thought.' For these reasons, such emotion words are often translated as having to do specifically with thought and cognition: m\u00f4cik\u00eayihtam, 's/he thinks happily (because of something).' Wolvengrey (2001) uses these sorts of definitions, and so, unsurprisingly, the majority of such emotion words were classified in the proposed scheme together with words of thought. Where this was not the case, manual postprocessing from a bottom-up approach allows us to maintain the cultural understanding of emotions as directly related to cognition. Furthermore, from the experiential standpoint of one of the authors, the use of semi-automatic clustering provides a kick-start that greatly aids the beginning of a semantic classification task, especially for non-native speakers.", "cite_spans": [ { "start": 221, "end": 233, "text": "Hanks (1996)", "ref_id": "BIBREF8" }, { "start": 1038, "end": 1055, "text": "(Wolvengrey, 2001", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "This paper describes an attempt at, for the first time, semi-automatically classifying N\u00eahiyaw\u00eawin verbs and nouns. 
The process used in this paper is easily applied to any language that has a bilingual dictionary with definitions written in a more-resourced language. The resulting clusters of N\u00eahiyaw\u00eawin words are freely available online. Although the technique worked quite well with nouns, which required very little manual adjustment, verbs required more directed attention. Despite this, the technique presented in this paper offers a bottom-up, data-driven approach that takes the language on its own terms, without resorting to ontologies created primarily for other languages. If, however, one wishes to use a pre-defined ontology, the basis for this work (representing word definitions using pre-trained English word vectors) could be used in conjunction with existing ontologies to expedite the classification process. For example, Dacanay et al. (2021) compare the naive definition vectors for Wolvengrey (2001) with the same for the English WordNet word senses; word senses whose vectors bear a strong correlation with the N\u00eahiyaw\u00eawin definitions can then be assumed to be semantically similar to a N\u00eahiyaw\u00eawin word, and the latter can take the WordNet classification of the former. Further research should investigate more sophisticated methods of creating embeddings, especially the use of true sentence vectors. Additionally, one could consider weighting the English words in the definitions of N\u00eahiyaw\u00eawin words using measures like tf-idf. Overall, this technique provided promising results. Regardless of the language or particular implementation, this technique of bootstrapping under-resourced language data with pre-trained majority-language vectors (for which very large corpora exist) should not be restricted by the sizes of dictionaries in the under-resourced language, as the underlying vectors are trained on a roughly 100 billion word English corpus.", "cite_spans": [ { "start": 955, "end": 976, "text": "Dacanay et al. (2021)", "ref_id": "BIBREF5" }, { "start": 1018, "end": 1035, "text": "Wolvengrey (2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "See http://rapidwords.net/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These vectors were trained on a corpus of roughly 100 billion words. Available at https://code.google.com/archive/p/word2vec/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In reality, there were only a handful of cases where words occurred in the dictionary but not in the Google News Corpus. Because there are so few examples of this, even simply leaving these items out would not substantially change clustering results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As discussed in Wolvengrey (2005), N\u00eahiyaw\u00eawin sentences are devoid of subjects and objects in the usual sense. Instead, syntactic roles are defined by verbal direction alignment. For this reason, we use the terms actor and goal instead of subject and object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The concepts of weather and time were combined here as many of the N\u00eahiyaw\u00eawin words for specific times also contain some concept of weather (e.g. the term for 'day' is k\u00eesik\u00e2w, clearly related to the word for 'sky/heavens', k\u00eesik; similarly, the word for 'night' is tipisk\u00e2w, which is the same word used for the night sky). 
Additionally, words like pipon, 'winter,' and s\u00eekwan, 'spring,' are obviously related to both time and weather.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Table 3: Postprocessed noun classes by size. NI: NI-nominal (1783), NI-object (902), NI-natural-force (283), NI-place (228), NI-nature-plants (198), NI-body-part (78), NI-hunt-trap (60), NI-animal-product (48), NI-religion (36), NI-alteration (23), NI-scent (4), NI-days (4), NI-persons (3). NDI: NDI-body (243), NDI-house (2). NA: NA-persons (720), NA-beast-of-burden (512), NA-food (325), NA-celestial (45), NA-body-part (37), NA-religion (23), NA-money/count (12), NA-shield (2). NDA: NDA-relations (143), NDA-body (45), NDA-clothing (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NI (N)", "sec_num": null }, { "text": "We would like to thank the Social Sciences and Humanities Research Council of Canada for funding this research. We would also like to thank Dr. Arok Wolvengrey for providing his dictionary source for this study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Table 4 (fragment): Postprocessed verb classes by size, as far as recoverable: AI-speech (131), AI-collective (97), AI-care (81), AI-heat/fire (55), AI-money/count (34), AI-pray (29), AI-childcare (17), AI-canine (16), AI-cover (15); TI-fit (10), TI-food (8); TA-money/count (23), TA-religion (9), TA-allow (5); II-named (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A morphosyntactically tagged corpus for Plains Cree", "authors": [ { "first": "Antti", "middle": [], "last": "Arppe", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Schmirler", "suffix": "" }, { "first": "Atticus", "middle": [ "G" ], "last": "Harrigan", "suffix": "" }, { "first": "Arok", "middle": [], "last": "Wolvengrey", "suffix": "" } ], "year": 2020, "venue": "Papers of the Forty-Ninth Algonquian Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antti Arppe, Katherine Schmirler, Atticus G Harrigan, and Arok Wolvengrey. 2020. A morphosyntactically tagged corpus for Plains Cree. In Papers of the Forty-Ninth Algonquian Conference. 
Michigan State University Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Univariate, bivariate, and multivariate methods in corpus-based lexicography: A study of synonymy", "authors": [ { "first": "Antti", "middle": [], "last": "Arppe", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antti Arppe et al. 2008. Univariate, bivariate, and multivariate methods in corpus-based lexicography: A study of synonymy.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fitting linear mixed-effects models using lme4", "authors": [ { "first": "Douglas", "middle": [], "last": "Bates", "suffix": "" }, { "first": "Martin", "middle": [], "last": "M\u00e4chler", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Bolker", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2015, "venue": "Journal of Statistical Software", "volume": "67", "issue": "1", "pages": "1--48", "other_ids": { "DOI": [ "10.18637/jss.v067.i01" ] }, "num": null, "urls": [], "raw_text": "Douglas Bates, Martin M\u00e4chler, Ben Bolker, and Steve Walker. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1):1-48.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Predicting the dative alternation", "authors": [ { "first": "Joan", "middle": [], "last": "Bresnan", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Cueni", "suffix": "" }, { "first": "Tatiana", "middle": [], "last": "Nikitina", "suffix": "" }, { "first": "R Harald", "middle": [], "last": "Baayen", "suffix": "" } ], "year": 2007, "venue": "Cognitive foundations of interpretation", "volume": "", "issue": "", "pages": "69--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Bresnan, Anna Cueni, Tatiana Nikitina, and R Harald Baayen. 2007. Predicting the dative alternation. In Cognitive foundations of interpretation, pages 69-94. KNAW.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The clause-typing system of Plains Cree: Indexicality, anaphoricity, and contrast", "authors": [ { "first": "Clare", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clare Cook. 2014. The clause-typing system of Plains Cree: Indexicality, anaphoricity, and contrast, volume 2. OUP Oxford.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Computational Analysis versus Human Intuition: A Critical Comparison of Vector Semantics with Manual Semantic Classification in the Context of Plains Cree", "authors": [ { "first": "Daniel", "middle": [], "last": "Dacanay", "suffix": "" }, { "first": "Antti", "middle": [], "last": "Arppe", "suffix": "" }, { "first": "Atticus", "middle": [], "last": "Harrigan", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 4th Workshop on the Use of Computational Methods in the Study of Endangered Languages", "volume": "1", "issue": "", "pages": "33--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Dacanay, Antti Arppe, and Atticus Harrigan. 2021. Computational Analysis versus Human Intuition: A Critical Comparison of Vector Semantics with Manual Semantic Classification in the Context of Plains Cree. 
In Proceedings of the 4th Workshop on the Use of Computational Methods in the Study of Endangered Languages, volume 1, pages 33-43.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Ways of trying in Russian: Clustering behavioral profiles. Corpus linguistics and linguistic theory", "authors": [ { "first": "Dagmar", "middle": [], "last": "Divjak", "suffix": "" }, { "first": "Stefan", "middle": [ "Th" ], "last": "Gries", "suffix": "" } ], "year": 2006, "venue": "", "volume": "2", "issue": "", "pages": "23--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagmar Divjak and Stefan Th Gries. 2006. Ways of trying in Russian: Clustering behavioral profiles. Corpus linguistics and linguistic theory, 2(1):23-60.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic information extraction and building of lexical semantic resources for NLP applications", "authors": [ { "first": "Birgit", "middle": [], "last": "Hamp", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Feldweg", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a lexical-semantic net for German. In Automatic information extraction and building of lexical semantic resources for NLP applications.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Contextual dependency and lexical sets", "authors": [ { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1996, "venue": "International journal of corpus linguistics", "volume": "1", "issue": "1", "pages": "75--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Hanks. 1996. Contextual dependency and lexical sets. International journal of corpus linguistics, 1(1):75-98.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "English verb classes and alternations: A preliminary investigation", "authors": [ { "first": "Beth", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. University of Chicago Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Manual clustering and spatial arrangement of verbs for multilingual evaluation and typology analysis", "authors": [ { "first": "Olga", "middle": [], "last": "Majewska", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "4810--4824", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olga Majewska, Ivan Vuli\u0107, Diana McCarthy, and Anna Korhonen. 2020. Manual clustering and spatial arrangement of verbs for multilingual evaluation and typology analysis. 
In Proceedings of the 28th International Conference on Computational Linguistics, pages 4810-4824.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Quantitative Methods for Analyzing Travel Behaviour of Individuals: Some Recent Developments", "authors": [ { "first": "Daniel", "middle": [], "last": "McFadden", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel McFadden. 1977. Quantitative Methods for Analyzing Travel Behaviour of Individuals: Some Recent Developments. Technical report.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Conditional logit analysis of qualitative choice behavior", "authors": [ { "first": "Daniel", "middle": [], "last": "McFadden", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel McFadden et al. 1973. Conditional logit analysis of qualitative choice behavior.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "WordNet: An electronic lexical database", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1998. WordNet: An electronic lexical database. MIT Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Vietnamese compounds show an anti-frequency effect in visual lexical decision.
Language, Cognition and Neuroscience", "authors": [ { "first": "Hien", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Harald", "middle": [], "last": "Baayen", "suffix": "" } ], "year": 2015, "venue": "", "volume": "30", "issue": "", "pages": "1077--1095", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hien Pham and Harald Baayen. 2015. Vietnamese compounds show an anti-frequency effect in visual lexical decision. Language, Cognition and Neuroscience, 30(9):1077-1095.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing", "authors": [ { "first": "", "middle": [], "last": "R Core Team", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R Core Team. 2017. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Eastern James Bay Cree thematic dictionary (southern dialect)", "authors": [ { "first": "Linda", "middle": [], "last": "Visitor", "suffix": "" }, { "first": "Marie-Odile", "middle": [], "last": "Junker", "suffix": "" }, { "first": "Mimie", "middle": [], "last": "Neacappo", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Visitor, Marie-Odile Junker, and Mimie Neacappo. 2013. Eastern James Bay Cree thematic dictionary (southern dialect). Chisasibi: Cree School Board.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building the Chinese open wordnet (COW): Starting from core synsets", "authors": [ { "first": "Shan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bond", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 11th Workshop on Asian Language Resources", "volume": "", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shan Wang and Francis Bond. 2013. Building the Chinese open wordnet (COW): Starting from core synsets. In Proceedings of the 11th Workshop on Asian Language Resources, pages 10-18.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "N\u0113hiyaw\u0113win: itw\u0113wina = Cree: words", "authors": [ { "first": "Arok", "middle": [], "last": "Wolvengrey", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arok Wolvengrey. 2001. N\u0113hiyaw\u0113win: itw\u0113wina = Cree: words. University of Regina Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Inversion and the absence of grammatical relations in Plains Cree. Morphosyntactic expression in functional grammar", "authors": [ { "first": "Arok", "middle": [], "last": "Wolvengrey", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arok Wolvengrey. 2005. Inversion and the absence of grammatical relations in Plains Cree. 
Morphosyntactic expression in functional grammar, 27.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Refining word embeddings for sentiment analysis", "authors": [ { "first": "Liang-Chih", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "K", "middle": [ "Robert" ], "last": "Lai", "suffix": "" }, { "first": "Xuejie", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "534--539", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang-Chih Yu, Jin Wang, K Robert Lai, and Xuejie Zhang. 2017. Refining word embeddings for sentiment analysis. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 534-539.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "type_str": "table", "text": "HAC-built cluster counts vs. counts after postprocessing", "content": "", "html": null }, "TABREF3": { "num": null, "type_str": "table", "text": "
Pseudo-R2 values for modelling Independent vs. \u00ea-Conjunct Order choice based on manual and automatic clustering
", "html": null } } } }