{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:37.972967Z" }, "title": "Sanskrit Segmentation Revisited", "authors": [ { "first": "Sriram", "middle": [], "last": "Krishnan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hyderabad", "location": {} }, "email": "sriramk8@gmail.com" }, { "first": "Amba", "middle": [], "last": "Kulkarni", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hyderabad", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Computationally analyzing Sanskrit texts requires proper segmentation in the initial stages. There have been various tools developed for Sanskrit text segmentation. Of these, G\u00e9rard Huet's Reader in the Sanskrit Heritage Engine analyzes the input text and segments it based on the word parameters-phases like iic, ifc, Pr, Subst, etc., and sandhi (or transition) that takes place at the end of a word with the initial part of the next word. And it enlists all the possible solutions differentiating them with the help of the phases. The phases and their analyses have their use in the domain of sentential parsers. In segmentation, though, they are not used beyond deciding whether the words formed with the phases are morphologically valid. This paper tries to modify the above segmenter by ignoring the phase details (except for a few cases), and also proposes a probability function to prioritize the list of solutions to bring up the most valid solutions at the top.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "Computationally analyzing Sanskrit texts requires proper segmentation in the initial stages. There have been various tools developed for Sanskrit text segmentation. Of these, G\u00e9rard Huet's Reader in the Sanskrit Heritage Engine analyzes the input text and segments it based on the word parameters-phases like iic, ifc, Pr, Subst, etc., and sandhi (or transition) that takes place at the end of a word with the initial part of the next word. And it enlists all the possible solutions differentiating them with the help of the phases. The phases and their analyses have their use in the domain of sentential parsers. In segmentation, though, they are not used beyond deciding whether the words formed with the phases are morphologically valid. This paper tries to modify the above segmenter by ignoring the phase details (except for a few cases), and also proposes a probability function to prioritize the list of solutions to bring up the most valid solutions at the top.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Every Sanskrit sentence in the sa\u1e43hit\u0101 form (continuous sandhied text) is required to be segmented into proper morphologically acceptable words and the obtained result should agree with syntactic and semantic correctness for it's proper understanding. The obtained segmented text consists of individual words where even the compounds are segmented into their components. And there can be more than one segmentation for the same sa\u1e43hit\u0101 text. The segmented form does not provide any difference in the sense of the text when compared with the sa\u1e43hit\u0101 form except for the difference in the phonology of the words where it can be observed that the end part of the initial word together with the first letter of the next word undergoes phonetic change. 
The sa\u1e43hit\u0101 form, in fact, represents the text much like speech, because knowledge transfer in the olden days was predominantly based on oral rendition. Now, however, to extract information from these texts, it is necessary that they be broken down into pieces so that the intention of the text is revealed completely and without any ambiguity. To understand any Sanskrit text, this process of breaking it down into individual words is necessary; it is popularly known as sandhi-viccheda (splitting of the joint text) in Sanskrit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This process takes into account the morphological analyses of each of the split-parts obtained. As there is always a possibility of multiple morphological analyses even for individual words, considering only the morphological validation might result in an enormous number of solutions for long sentences. So, syntactic accuracy is also measured to reduce the number of solutions. Even then, multiple solutions may remain, which cannot be resolved further without the semantic and contextual understanding of the sentence (Hellwig, 2009). Owing to this, we find that there is non-determinism right at the start of linguistic analysis (Huet, 2009), since sandhi splitting is the first step in the analysis of a Sanskrit sentence.", "cite_spans": [ { "start": 554, "end": 569, "text": "(Hellwig, 2009)", "ref_id": "BIBREF7" }, { "start": 667, "end": 679, "text": "(Huet, 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This kind of non-determinism is also found in languages like Chinese and Japanese, where word boundaries are not indicated, and also in agglutinative languages like Turkish (Mittal, 2010). In some of these languages, such as Thai (Haruechaiyasak et al., 2008), most sentences are mere concatenations of words. Possible boundaries are predicted using the syllable information, and the process of segmentation starts with segmenting the syllables first, followed by the actual word segmentation. For Chinese, though, the characters, called hanzi, are easily identifiable, and the segmentation can be done by tagging (Xue, 2003), or by determining the word-internal positions using machine learning or deep learning algorithms (as is done in Ma et al. (2018)). In the case of Vietnamese (Thang et al., 2008), compound words are predominantly formed by semantic composition from 7000 syllables, which can also exist independently as separate words. This is similar to what can be observed in aluk sam\u0101sa in Sanskrit, which is rare in occurrence. For languages like English, French and Spanish, where the boundaries are explicitly marked by delimiters like the space, comma, semi-colon, full stop, etc., segmentation is done using these delimiters and is comparatively simple.", "cite_spans": [ { "start": 177, "end": 191, "text": "(Mittal, 2010)", "ref_id": "BIBREF22" }, { "start": 231, "end": 260, "text": "(Haruechaiyasak et al., 2008)", "ref_id": "BIBREF6" }, { "start": 623, "end": 634, "text": "(Xue, 2003)", "ref_id": "BIBREF29" }, { "start": 753, "end": 769, "text": "Ma et al. 
(2018)", "ref_id": "BIBREF20" }, { "start": 799, "end": 819, "text": "(Thang et al., 2008)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In all the above cases, we find that either there are delimiters to separate the words, or individual words are joined by concatenation which ultimately rests the segmentation process in the identification of boundaries. In the case of Sanskrit though, these kinds of words form a very small percentage. Rather, there is the euphony transformation that takes place at every word boundary. This transition can be generally stated as u|v \u2192 w, where u is the final part of the first word, v the first part of the next word, and w the resultant form after combining u and v. Here the parts may contain at the most two phonemes. The resultant w may contain additional phonemes or may have elisions, but never are more than two phonemes introduced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "So this transition or sandhi (external) occurs only at the phoneme level, and it does not require any other information regarding the individual words used. 1 But the reverse process of segmentation does require a morphological analyzer to validate the segments in a split.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "And it is entirely up to the speaker or writer to perform these transitions or keep the words separated (called vivak\u1e63\u0101 -speaker's intention or desire). But in most of the texts and manuscripts, the sandhi is done throughout the text. So, finding the split location alone will not be enough to segment the texts properly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Having looked into some of the intricacies of sandhi in Sanskrit, we can come up with a mechanical segmentation algorithm that splits a given text into all possible segments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Traverse through the input text and mark all possible split locations which could be found in the list of sandhied letters. 2 2. When a sandhied letter is marked, then list all it's possible splits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Considering all the possible combinations of the words formed after each of these splits are allowed to join with the respective words (left word or right word), take each of the words, starting from the first word, to check for the morphological feasibility. Keep in mind that the words thus formed may also bypass the split locations, where they don't consider the split location present in between them. 4. If the word is morphologically correct, then consider it as a valid split word and move on to the next split location, and do step 3 until the last word of the sentence is reached. The sequence of words thus formed is the first solution. If the word is not morphologically correct, move to step 5. If all the words formed in a single split location, either on the left or on the right, or both, are not morphologically correct, then discard that split location and move to step 3 for the next location.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "5. 
Check the words formed from the subsequent splits and continue with steps 3 and 4 to obtain other solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "6. Trace back every split location, and perform step 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "7. In this way, get all the possible combinations of the split words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although this mechanical process looks quite simple, the previously mentioned issues like non-determinism do prevail. Systems like the Sanskrit Reader in the Sanskrit Heritage Engine come up with better ways to account for these problems. The current paper tries to update these efforts. It is organized as follows: Section 2 reviews how the segmentation of Sanskrit has been dealt with in recent years. Section 3 discusses the important features of the Sanskrit Heritage Engine's Reader. Section 4 explains in detail the issues present in the Reader. Section 5 presents the modifications needed and their implementation for this paper; it also gives the theoretical reasons for these modifications and introduces the probability function proposed in this paper. Section 6 describes the methodology of the implementation, and the results and observations are in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Achieving the correct segmentation computationally is as difficult as it is manually. A general approach would be the conversion of the mechanical sandhi splitting process mentioned in Section 1 into a working algorithm, followed by checking the statistics available for the frequencies of the words and transitions. But there has been much better research work, both rule-based and statistical, on computational sandhi splitting in Sanskrit. Huet (2003), as a part of the Sanskrit Heritage Engine, developed a Segmenter for Sanskrit texts using a Finite State Transducer. Two different segmenters were developed: one for internal sandhi, which is deployed in the morphological analyser, and the other for external sandhi. The current paper focuses on updating this external sandhi segmenter. Mittal (2010) used Optimality Theory to derive a probabilistic method, and developed two methods to segment the input text: (1) augmenting the finite state transducer developed using OpenFst (Allauzen et al., 2007) with sandhi rules, where the FST is used for the analysis of the morphology and is traversed for the segmentation, and (2) using Optimality Theory to validate all the possible segmentations. Kumar et al. (2010) developed a compound processor, where the segmentation of compound words is done, using Optimality Theory with a different probabilistic method (discussed in Section 5).", "cite_spans": [ { "start": 450, "end": 461, "text": "Huet (2003)", "ref_id": "BIBREF10" }, { "start": 801, "end": 814, "text": "Mittal (2010)", "ref_id": "BIBREF22" }, { "start": 1003, "end": 1026, "text": "(Allauzen et al., 2007)", "ref_id": "BIBREF0" }, { "start": 1218, "end": 1237, "text": "Kumar et al. 
(2010)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Current Methods", "sec_num": "2" }, { "text": "Natarajan and Charniak (2011) later modified the posterior probability function and also developed an algorithm based on Bayesian Word Segmentation methods with both unsupervised and supervised algorithms. Krishna et al. (2016) proposed an approach combining the morphological features and word co-occurrence features from a manually tagged corpus from Hellwig (2009) , and took the segmentation problem as a query expansion problem and used Path Constrained Random Walk framework for selecting the nodes of the graph built with possible solutions from the input.", "cite_spans": [ { "start": 206, "end": 227, "text": "Krishna et al. (2016)", "ref_id": "BIBREF15" }, { "start": 353, "end": 367, "text": "Hellwig (2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Current Methods", "sec_num": "2" }, { "text": "Reddy et al. (2018) built a word segmenter that uses a deep sequence to sequence model with attention to predict the correct solution. This is the state of art segmenter with precision and recall as 90.77 and 90.3, respectively. IBM Research team (Aralikatte et al., 2018) , had built a Double Decoder RNN with attention as seq2(seq) 2 , where they have emphasized finding the locations of the splits first, and then the finding of the split words. And they have the accuracy as 95% and 79.5% for finding the location of splits and the split sentence, respectively.", "cite_spans": [ { "start": 247, "end": 272, "text": "(Aralikatte et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Current Methods", "sec_num": "2" }, { "text": "Hellwig and Nehrdich (2018) developed a segmenter using Character-level Recurrent and Convolutional Neural Networks, where they tokenize Sanskrit by jointly splitting compounds and resolving phonetic merges. The model does not require feature engineering or external linguistic resources. It works well with just the parallel versions of raw and segmented text. proposed a structured prediction framework that jointly solves the word segmentation and morphological tagging tasks in Sanskrit by using an energy based model which uses approaches generally employed in graph based parsing techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current Methods", "sec_num": "2" }, { "text": "The Sanskrit Heritage Engine's Segmenter was chosen for further development, for three reasons -1. It is the best segmenter available online with source code available under GPL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "2. It uses a Finite State Transducer, and hence the segmentation is obtained in linear time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "3. It can produce all possible segmentations that one can arrive at, following P\u0101\u1e47ini's rules for sandhi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "It analyses the given input and produces the split based on three main factors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "1. 
Morphological feasibility: whether each of the words observed as a split is morphologically obtainable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "2. Transition feasibility: whether every transition observed with each of the words is allowed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "3. Phase feasibility: whether the sequence of words has proper phase values. This is a constraint on the POS of a word. Although Sanskrit is a free word order language, there are certain syntactic constraints which govern the word formation, and the sequence of components within a word follows a certain well-defined syntax. The phase feasibility module takes care of this. Figure 1 shows a part of the lexical analyzer, developed by Goyal and Huet (2013), that portrays these phases, like Iic, Inde, Noun, Root, etc.", "cite_spans": [ { "start": 434, "end": 455, "text": "Goyal and Huet (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 374, "end": 382, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "Let us consider the sentence r\u0101m\u0101layo\u2032sti as an example to understand these factors. There are twelve possible split solutions, given in Table 1, and all the observed split words are shown in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 1", "ref_id": null }, { "start": 227, "end": 235, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "Other possible words like r\u0101, m\u0101laya\u1e25, etc. are not taken as proper splits because they do not form proper words according to the morphological analyzer present in the system. (Table 1. List of solutions for the sentence r\u0101m\u0101layo\u2032sti: 1. r\u0101ma (iic) \u0101laya\u1e25 (\u0101laya/\u0101li masc) asti; 2. r\u0101ma (iic) \u0101laya\u1e25 (\u0101li fem) asti; 3. r\u0101ma (iic) alaya\u1e25 (ali masc) asti; 4. r\u0101ma (iic) a (iic) laya\u1e25 asti; 5. r\u0101ma (iic) alaya\u1e25 (ali fem) asti; 6. r\u0101m\u0101 (fem) laya\u1e25 asti; 7. r\u0101m\u0101 (fem) \u0101laya\u1e25 (\u0101laya/\u0101li masc) asti; 8. r\u0101m\u0101 (fem) alaya\u1e25 (ali masc) asti; 9. r\u0101m\u0101 (fem) a laya\u1e25 asti; 10. r\u0101ma (r\u0101) \u0101laya\u1e25 (\u0101laya/\u0101li masc) asti; 11. r\u0101ma (r\u0101) alaya\u1e25 asti; 12. r\u0101ma (r\u0101) a (iic) laya\u1e25 asti.) In this way, morphological feasibility is checked. In the same example, we find that, at the last possible split location, represented by o\u2032, we can split it as a\u1e25 and a but not in any other way. 4 This is ensured by the transition feasibility module.", "cite_spans": [], "ref_spans": [ { "start": 481, "end": 488, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "The phase details, like iic for r\u0101ma or pr for asti, etc., are displayed along with the words. These assignments of the phase information to the words and their analysis are the jobs of the phase feasibility module. To understand these phases, look at Figure 3 (the first solution for the sentence r\u0101m\u0101layo\u2032sti). r\u0101ma is the first split and has the phase iic. \u0101laya\u1e25 is the second split, with two morphological possibilities: \u0101laya and \u0101li. 
The transition between the first two words is a | \u0101 \u2192 \u0101. The third split is asti, with root as and phase pr. Its transition follows the equation a\u1e25 | a \u2192 o\u2032. These transitions are taken care of by the transition feasibility module, and the phases mentioned above are taken care of by the phase feasibility module.", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 258, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "According to Goyal and Huet (2013), sentences are formed as the image, under the relation R (the sandhi rules), of the Kleene closure W* of a regular set W of words (the vocabulary of inflected words). The Sanskrit Heritage Reader accepts a candidate sentence w and applies the inverted form of the relation R, thus producing a set of words w1, w2, w3, .... Each of the individual words is valid according to the rules of morphology, and their combination makes some sense.", "cite_spans": [ { "start": 13, "end": 34, "text": "Goyal and Huet (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "The methodology followed in the Segmenter proposed in Goyal and Huet (2013) starts with using the finite state transducer for generating the chunks, instead of the traditional recursive method over the sentence employed in many sandhi splitting tools. The FST considers the phases as important characteristics of the words. These phases correspond to a finite set of forms.", "cite_spans": [ { "start": 54, "end": 75, "text": "Goyal and Huet (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "To understand how a word is obtained, let us first take a small example of how the substantival forms (subantas) are obtained. A subanta is analysed as a nominal stem followed by a suffix. The nominal stem can be either an underived stem or a derived stem. In the case of a derived stem, the derivation of this stem is also provided by the segmenter. A compound, for example, has a derived stem which contains a sequence of components followed by a nominal suffix. Three phases are present to represent the subantas:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "1. Noun, which contains declined forms of autonomous atomic substantive and adjective stems, from the lexicon 2. Ifc, non-autonomous and used as the right-hand component of a compound", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heritage Segmenter", "sec_num": "3" }, { "text": "The sequence Subst \u2192 Noun \u2192 Accept creates a noun word, and the sequence Subst \u2192 Iic + \u2192 Ifc \u2192 Accept creates a compound word. These sequences can be observed in Figure 1 . In this way, forms from these phases are selected and, gluing them with sandhi rules, a word is obtained. Considering all such possible phases in Sanskrit, an automaton transition graph is formed and is traversed to find the possible split locations and words together.", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 179, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Iic contains bare stems of nouns to be used as left component", "sec_num": "3." 
}, { "text": "The Segmenter is embedded in the Sanskrit Reader which displays all the outputs with the corresponding split word, it's phase and the transition involved with the subsequent word, except when the number of outputs is huge, in which case it shows only the summary. The Reader shows the distinction between words and phases based on verb, noun, iic, inde, etc, but not between some of the case-markers. So, there is inconsistency in disambiguation: sometimes the phase is used for pruning out certain solutions, but in some cases, it is not. For example, r\u0101movana\u1e45gacchati produces the following 4 solutions: The segmenter provides segmentation and also does partial disambiguation. For example, r\u0101ma\u1e25 is ambiguous morphologically and the machine has correctly disambiguated the alternatives. We see the noun analysis of it in the first and second solutions, and the verbal analysis in third and fourth solutions. But we notice that the word vanam which is ambiguous between two morphological analyses, one with nominative case marker and the other with accusative marker, is not disambiguated. Goyal and Huet (2013) mention that the consideration of a word's similar declensions as different might result in more ambiguity, and the purpose of the segmentation is to find the morphologically apt words and hence they are taken as one.", "cite_spans": [ { "start": 1093, "end": 1114, "text": "Goyal and Huet (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Issues in Heritage Segmenter", "sec_num": "4" }, { "text": "If we look at these four solutions, at the word level, all of them correspond to r\u0101-ma\u1e25 vanam gacchati. In order to decide the correct solution among the four, we need to do syntactico-semantic analyses that depend solely upon the linguistic or grammatical information in the sentence (Kulkarni, 2013) .", "cite_spans": [ { "start": 285, "end": 301, "text": "(Kulkarni, 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Issues in Heritage Segmenter", "sec_num": "4" }, { "text": "We notice that the use of phase information results in multiple solutions. In order to choose the correct solution among them, one needs to look beyond the word analysis and look at the possible relations between the words. This is the domain of the sentential parser. Only a sentential parser can decide which of the segmentations with phase information is the correct one. Thus we do not see any advantage of having the phase information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "And in the interface, the system is not uniform in resolving the ambiguities. It uses certain morphologically different phases under a single word, like vanam in section 4. Additionally, in the options for selecting or rejecting the words, sometimes the depth of the graph goes so deep that, there is a chance to miss some solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "Here we would like to mention that some of the phase information is still relevant for segmentation. And this corresponds to the compounds. The phase information tells if something is a component of a compound or a standalone noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "There are a few phases such as iic, iif, etc. that we do not ignore. 
Barring these compound-related phases, we ignore all the others. Therefore, we propose the following modifications in the segmenter:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "1. Ignore the phase information that is irrelevant from the segmentation point of view and merge the solutions that have the same word-level segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "2. Prioritize the solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "This is similar to the intention in Reddy et al. (2018), where the morphological and other linguistic details are not obtained, but the segmentation problem is seen as an end in itself. This is also similar to the update that Huet (2009) made to the compound analyzer of Gillon (2009): Gillon (2009) uses the dependency structure to get the tree form consisting of all the parts of the compound word, whereas Huet (2009) made the lexical analyzer understand the compound as a right-recursive linear structure of a sequence of components. This made sure that only the compound components are obtained, and not their relationships with each other. This helps in easier and faster segmentation, but the next level of syntactic analysis cannot be done without the relationship information of the components. Similarly, the same approach has been extended to all words, and not just compound words, and the phase details are not considered valid parameters for distinguishing solutions. Solutions differing only in such details were termed duplicates and hence removed.", "cite_spans": [ { "start": 36, "end": 55, "text": "Reddy et al. (2018)", "ref_id": "BIBREF24" }, { "start": 214, "end": 225, "text": "Huet (2009)", "ref_id": "BIBREF12" }, { "start": 247, "end": 260, "text": "Gillon (2009)", "ref_id": "BIBREF2" }, { "start": 292, "end": 305, "text": "Gillon (2009)", "ref_id": "BIBREF2" }, { "start": 411, "end": 422, "text": "Huet (2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "Once the duplicates are removed, prioritization needs to be done. Many probabilistic measures have been proposed in the past to prioritize the solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "Mittal (2010) calculated the weight for a specific split s_j as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "W_{s_j} = \frac{\prod_{i=1}^{m-1} (\hat{P}(c_i) + \hat{P}(c_{i+1})) \times \hat{P}(r_i)}{m} (1) where \hat{P}(c_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "is the probability of the occurrence of the word c_i in the corpus, \hat{P}(r_i) is the probability of the occurrence of the rule r_i in the corpus, and m is the number of individual components in the split s_j . Kumar et al. (2010) uses the weight of the split s_j as", "cite_spans": [ { "start": 208, "end": 227, "text": "Kumar et al. 
(2010)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W sj = ( \u220f m i=1P (c i )) \u00d7 ( \u220f m\u22121 i=1P (r i )) m", "eq_num": "(2)" } ], "section": "Proposed Modification", "sec_num": "5" }, { "text": "Natarajan and Charniak (2011) proposed a posterior probability function,P (s), the probability of generating the split s = \u27e8c 1 ...c m \u27e9, with m splits, and rules r = \u27e8r 1 , ..., r m\u22121 \u27e9 applied on the input, wher\u00ea", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (s) =P (c 1 )\u00d7P (c 2 |c 1 )\u00d7P (c 3 |c 2 , c 1 )\u00d7... (3) P (s) = m \u220f j=1P (c j )", "eq_num": "(4)" } ], "section": "Proposed Modification", "sec_num": "5" }, { "text": "P (c 1 ) is the probability of occurrence of the word c 1 .P (c 2 |c 1 ) is the probability of occurrence of the word c 2 given the occurrence of the word c 1 , and so on. Mittal (2010) and Kumar et al. (2010) follow the GEN-CON-EVAL paradigm attributed to the Optimality Theory. This paper considers a similar approach but the probability function is taken as just the POP (product-of-products) of the word and transition probabilities of each of the solutions, discussed in section 6.", "cite_spans": [ { "start": 172, "end": 185, "text": "Mittal (2010)", "ref_id": "BIBREF22" }, { "start": 190, "end": 209, "text": "Kumar et al. (2010)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "And to prioritize the solutions, the following statistical data was added from the SHMT Corpus: 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "\u2022 sam\u0101sa words with frequencies \u2022 sandhi words with frequencies \u2022 sam\u0101sa transition types with frequencies \u2022 sandhi transition types with frequencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Modification", "sec_num": "5" }, { "text": "Every solution obtained after segmentation is checked for the two details viz. the word and the transition (that occurs at the end of the word due to the presence of the next word), along with the phase detail that is checked only for those which correspond to the components of a compound. 
For every solution s, with output s = \u27e8w_1, w_2, ..., w_n\u27e9, a confidence value C_{total} is obtained, which is the product, over the words w_i, of the transition probability (P_{t_i}) and the word probability (P_{w_i}),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "C_{total} = \prod_{i=1}^{n} P_{w_i} \times P_{t_i} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "The confidence value is obtained as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "\u2022 For every split word w_i, its phase is checked to know whether the obtained word forms a compound or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "\u2022 If it is a compound word, then its corresponding frequency is obtained from the compound words' statistical data, to calculate the word_probability, P_{w_i}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "\u2022 If it is not a compound word, then the corresponding frequency is obtained from the sandhi words' statistical data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "\u2022 For every transition associated with the word, the transition's corresponding frequency is obtained from either the sam\u0101sa transition data or the sandhi transition data, based on the phase of the word, to calculate the transition_probability, P_{t_i}. 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "\u2022 The confidence value for the word w_i is thus obtained as the product word_probability \u00d7 transition_probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_i = P_{w_i} \times P_{t_i}", "eq_num": "(6)" } ], "section": "Methodology", "sec_num": "6" }, { "text": "\u2022 Finally, the product of all such products is obtained for a single solution as the confidence value of the solution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_{total} = \prod_{i=1}^{n} P_{w_i} \times P_{t_i}", "eq_num": "(7)" } ], "section": "Methodology", "sec_num": "6" }, { "text": "The solutions are then sorted in decreasing order of confidence values, and the duplicates are removed based only on the word splits. The remaining solutions are displayed along with their number and confidence values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "6" }, { "text": "The test data contained, in all, 21,127 short sandhied expressions, which were taken from various texts available in the SHMT corpus. This data was a parallel corpus of sandhied and unsandhied expressions. 
In cases where more than one segmentation is possible, only the segmentation that was appropriate in the context where the sandhied expression was found is recorded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "7" }, { "text": "The above data was fed to both the old and the modified segmenters. The results of the old segmenter were used as the baseline. A comparison was made of how the updated system performed with respect to the old system. The correct solution's position in the old segmenter was compared with the correct solution's position in the updated segmenter. Table 2 summarizes the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "7" }, { "text": "The old segmenter was able to correctly produce the segmented form in 19,494 cases out of the 21,127 instances. Of these, the correct solution was found in the first position in 53.51% of the cases, in the second position in 12%, and in the third in 9.61%. All put together, 75.12% of the correct solutions were found in the top three solutions. Another important observation was that the total number of solutions, taken all together, was 2,40,942 for the 21,127 test instances, and the average number of solutions was 11.4, with the correct solution's position averaging 4.71.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "7" }, { "text": "The modified segmenter was able to correctly produce the segmented form in 19,494 cases, the same as the old segmenter. The correct solution was found in the first position in 89.27% of the cases, in the second position in 6.83%, and in the third in 2.2%. All put together, 98.3% of the correct solutions were found in the top three solutions. This is an increase of 23.18% over the existing system. Also, the total number of solutions, taken all together, was 1,46,610 for the 21,127 test instances, a drastic reduction of 94,332 solutions. The average number of solutions was 6.94, with the correct solution's position averaging 1.18.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "7" }, { "text": "It can be noted that the overall recall was 0.92270554267 for both the systems. Since only the statistics have been altered, the new system does not provide new solutions; rather, it has increased the chances of getting the correct solution in the top three by 23.18%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "7" }, { "text": "As we observe, the updated system reduces the total number of solutions and brings up the most likely solutions. Also, we have more than 90% recall in both cases. The missed instances were due either to morphological unavailability or to a failure of the engine. Once the morphological analyzer is updated, there will definitely be a boost in the efficiency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "7" }, { "text": "There are a few observations to be noted. First, by just using the POP (product of products) of the word and transition probabilities, we are able to obtain 98% precision. With better probabilities, we will definitely have better results. 
Second, this system can now be used to mechanically split continuous texts like the Sa\u1e43hita-P\u0101\u1e6dha of the Vedas or any other classical text to obtain the corresponding Pada-P\u0101\u1e6dha, which may be manually checked for correctness (Table 2: A comparison of the performance of both the segmenters). Third, for mere segmentation, the phase distinctions were ignored, and the obtained solutions were prioritized. As stated in the previous sections, to proceed to the next stage of parsing or disambiguation, we need more than just the split words. Thus, this could be a proper base for working on how the available segmented words, along with the phase details, may be used for further stages of analysis.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 459, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "In the case of internal sandhi between preverbs and verbs, the lexical knowledge of the preverb is required. And in some compounds (like those denoting a sa\u1e43j\u00f1\u0101), certain cases of retroflexion are permitted. But in this paper only the external sandhi is considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To get the list of sandhied letters, there is a list of s\u016btras or rules for the joining of letters, available in P\u0101\u1e47ini's A\u1e63\u1e6d\u0101dhy\u0101y\u012b, from which one can reverse-analyze and obtain the list of sandhied letters.3 For example, r\u0101m\u0101laya\u1e25 has split locations at three places: the second, the fourth (due to aka\u1e25 savar\u1e47e d\u012brgha\u1e25 in A\u1e63\u1e6d\u0101dhy\u0101y\u012b 6.1.101) and the sixth-seventh (due to eco'yav\u0101y\u0101va\u1e25 in A\u1e63\u1e6d\u0101dhy\u0101y\u012b 6.1.78) letters. So, r\u0101 is one split word, as is r\u0101ma, which bypasses the split location \u0101. Similarly, we can find the other split words as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the rules governing these transitions, refer to the A\u1e63\u1e6d\u0101dhy\u0101y\u012b s\u016btras atororaplut\u0101daplute (6.1.113) and ha\u015bi ca (6.1.114).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A corpus developed by the Sanskrit-Hindi Machine Translation (SHMT) Consortium under funding from DeItY, Govt of India (2008-12). 
http://sanskrit.uohyd.ac.in/scl/GOLD_DATA/tagged_data.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If the frequency is not available for either the word or the transition, then it is assigned a default value of 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Openfst: A general and efficient weighted finite-state transducer library", "authors": [ { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Schalkwyk", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Skut", "suffix": "" }, { "first": "Mehryar", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 2007, "venue": "Proceedings of International Conference on implementation and application of automata", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. Openfst: A general and efficient weighted finite-state transducer library. In Proceedings of International Conference on implementation and application of automata, Prague, Czech Republic.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sanskrit sandhi splitting using seq2(seq) 2", "authors": [ { "first": "Rahul", "middle": [], "last": "Aralikatte", "suffix": "" }, { "first": "Neelamadhav", "middle": [], "last": "Gantayat", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Panwar", "suffix": "" }, { "first": "Anush", "middle": [], "last": "Sankaran", "suffix": "" }, { "first": "Senthil", "middle": [], "last": "Mani", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahul Aralikatte, Neelamadhav Gantayat, Naveen Panwar, Anush Sankaran, and Senthil Mani. 2018. Sanskrit sandhi splitting using seq2(seq) 2. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Tagging classical sanskrit compounds", "authors": [ { "first": "B", "middle": [ "S" ], "last": "Gillon", "suffix": "" } ], "year": 2009, "venue": "Sanskrit Computational Linguistics", "volume": "3", "issue": "", "pages": "98--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. S. Gillon. 2009. Tagging classical sanskrit compounds. In Sanskrit Computational Linguistics 3, pages 98-105. Springer-Verlag LNAI 5406.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Completeness analysis of a sanskrit reader", "authors": [ { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" } ], "year": 2013, "venue": "Proceedings, 5th International Sanskrit Computational Linguistics Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawan Goyal and G\u00e9rard Huet. 2013. Completeness analysis of a sanskrit reader. 
In Proceedings, 5th International Sanskrit Computational Linguistics Symposium.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Design and analysis of a lean interface for Sanskrit corpus annotation", "authors": [ { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" } ], "year": 2016, "venue": "Journal of Linguistic Modeling", "volume": "4", "issue": "2", "pages": "117--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawan Goyal and G\u00e9rard Huet. 2016. Design and analysis of a lean interface for Sanskrit corpus annotation. Journal of Linguistic Modeling, 4(2):117-126.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A distributed platform for Sanskrit processing", "authors": [ { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" }, { "first": "Amba", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Scharf", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Bunker", "suffix": "" } ], "year": 2012, "venue": "24th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawan Goyal, G\u00e9rard Huet, Amba Kulkarni, Peter Scharf, and Ralph Bunker. 2012. A distributed platform for Sanskrit processing. In 24th International Conference on Computational Linguistics (COLING), Mumbai.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A comparative study on thai word segmentation approaches", "authors": [ { "first": "Choochart", "middle": [], "last": "Haruechaiyasak", "suffix": "" }, { "first": "Sarawoot", "middle": [], "last": "Kongyoung", "suffix": "" }, { "first": "Matthew", "middle": [ "N" ], "last": "Dailey", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Choochart Haruechaiyasak, Sarawoot Kongyoung, and Matthew N. Dailey. 2008. A comparative study on thai word segmentation approaches. In ECTI-CON.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sanskrittagger, a stochastic lexical and pos tagger for sanskrit", "authors": [ { "first": "Oliver", "middle": [], "last": "Hellwig", "suffix": "" } ], "year": 2009, "venue": "Lecture Notes in Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Hellwig. 2009. Sanskrittagger, a stochastic lexical and pos tagger for sanskrit. In Lecture Notes in Artificial Intelligence.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sanskrit word segmentation using character-level recurrent and convolutional neural networks", "authors": [ { "first": "Oliver", "middle": [], "last": "Hellwig", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Nehrdich", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2754--2763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver Hellwig and Sebastian Nehrdich. 2018. Sanskrit word segmentation using character-level recurrent and convolutional neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2754-2763. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Zen computational linguistics toolkit: Lexicon structures and morphology computations using a modular functional programming language", "authors": [ { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" } ], "year": 2002, "venue": "Tutorial, Language Engineering Conference LEC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00e9rard Huet. 2002. The Zen computational linguistics toolkit: Lexicon structures and morphology computations using a modular functional programming language. In Tutorial, Language Engineering Conference LEC'2002.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Lexicon-directed segmentation and tagging of sanskrit", "authors": [ { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" } ], "year": 2003, "venue": "XIIth World Sanskrit Conference", "volume": "", "issue": "", "pages": "307--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00e9rard Huet. 2003. Lexicon-directed segmentation and tagging of sanskrit. In XIIth World Sanskrit Conference, Helsinki, Finland. Final version in Themes and Tasks in Old and Middle Indo-Aryan Linguistics, Eds. Bertil Tikkanen and Heinrich Hettrich, pages 307-325, Delhi. Motilal Banarsidass.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A functional toolkit for morphological and phonological processing, application to a Sanskrit tagger", "authors": [ { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" } ], "year": 2005, "venue": "J. Functional Programming", "volume": "15", "issue": "", "pages": "573--614", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00e9rard Huet. 2005. A functional toolkit for morphological and phonological processing, application to a Sanskrit tagger. J. Functional Programming, 15,4:573-614.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sanskrit segmentation", "authors": [ { "first": "G\u00e9rard", "middle": [], "last": "Huet", "suffix": "" } ], "year": 2009, "venue": "South Asian Languages Analysis Roundtable", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00e9rard Huet. 2009. Sanskrit segmentation. In South Asian Languages Analysis Roundtable, Denton, Texas.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "From p\u0101\u1e47inian sandhi to finite state calculus", "authors": [ { "first": "Malcolm", "middle": [ "D" ], "last": "Hyman", "suffix": "" } ], "year": 2008, "venue": "Proceedings, 2nd International Sanskrit Computational Linguistics Symposium", "volume": "", "issue": "", "pages": "253--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Malcolm D. Hyman. 2008. From p\u0101\u1e47inian sandhi to finite state calculus. 
In Proceedings, 2nd International Sanskrit Computational Linguistics Symposium, pages 253-265.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Free as in free word order: An energy based model for word segmentation and morphological tagging in sanskrit", "authors": [ { "first": "Amrith", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Bishal", "middle": [], "last": "Santra", "suffix": "" }, { "first": "Prasanth", "middle": [], "last": "Sasi", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Bandaru", "suffix": "" }, { "first": "", "middle": [], "last": "Sahu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amrith Krishna, Bishal Santra, Sasi Prasanth Bandaru, Gaurav Sahu, Vishnu Dutt Sharma, Pavankumar Satuluri, and Pawan Goyal. 2018. Free as in free word order: An energy based model for word segmentation and morphological tagging in sanskrit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Word segmentation in sanskrit using path constrained random walks", "authors": [ { "first": "Amrith", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Bishal", "middle": [], "last": "Santra", "suffix": "" }, { "first": "Pavan", "middle": [], "last": "Kumar Satuluri", "suffix": "" }, { "first": "Prasanth", "middle": [], "last": "Sasi", "suffix": "" }, { "first": "Bhumi", "middle": [], "last": "Bandaru", "suffix": "" }, { "first": "Yajuvendra", "middle": [], "last": "Faldu", "suffix": "" }, { "first": "Pawan", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Goyal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL 2010 Student Research Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amrith Krishna, Bishal Santra, Pavan Kumar Satuluri, Sasi Prasanth Bandaru, Bhumi Faldu, Yajuvendra Singh, and Pawan Goyal. 2016. Word segmentation in sanskrit using path constrained random walks. In Proceedings of the ACL 2010 Student Research Workshop.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A deterministic dependency parser with dynamic programming for Sanskrit", "authors": [ { "first": "Amba", "middle": [], "last": "Kulkarni", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second International Conference on Dependency Linguistics", "volume": "", "issue": "", "pages": "157--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amba Kulkarni. 2013. A deterministic dependency parser with dynamic programming for Sanskrit. In Proceedings of the Second International Conference on Dependency Linguistics (DepLing 2013), pages 157-166, Prague, Czech Republic. Charles University in Prague, Matfyzpress, Prague, Czech Republic.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Parsing Sanskrit texts: Some relation specific issues", "authors": [ { "first": "Amba", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "K", "middle": [ "V" ], "last": "Ramakrishnamacharyulu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 5th International Sanskrit Computational Linguistics Symposium. D. K. 
Printworld(P) Ltd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amba Kulkarni and K. V. Ramakrishnamacharyulu. 2013. Parsing Sanskrit texts: Some relation specific issues. In Proceedings of the 5th International Sanskrit Computational Linguistics Symposium. D. K. Printworld(P) Ltd.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "How free is free word order in sanskrit", "authors": [ { "first": "Amba", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Preethi", "middle": [], "last": "Shukla", "suffix": "" } ], "year": null, "venue": "The Sanskrit Library", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amba Kulkarni, Preethi Shukla, Pavankumar Satuluri, and Devanand Shukl. 2015. How free is free word order in sanskrit. In The Sanskrit Library.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sanskrit compound processor", "authors": [ { "first": "Anil", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Vipul", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "Amba", "middle": [], "last": "Kulkarni", "suffix": "" } ], "year": 2010, "venue": "Proceedings, 4th International Sanskrit Computational Linguistics Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anil Kumar, Vipul Mittal, and Amba Kulkarni. 2010. Sanskrit compound processor. In Proceedings, 4th International Sanskrit Computational Linguistics Symposium.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "State-of-the-art chinese word segmentation with bi-lstms", "authors": [ { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4902--4908", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art chinese word segmentation with bi-lstms. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4902-4908. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Shallow syntax analysis in sanskrit guided by semantic nets constraints", "authors": [ { "first": "Prasenjit", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Swapan", "middle": [ "K" ], "last": "Parui", "suffix": "" } ], "year": 2006, "venue": "Proceedings of International Workshop on Research Issues in Digital Libraries", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prasenjit Majumder, Mandar Mitra, and Swapan K. Parui. 2006. Shallow syntax analysis in sanskrit guided by semantic nets constraints. In Proceedings of International Workshop on Research Issues in Digital Libraries, Kolkata. 
ACM Digital Library.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic Sanskrit segmentizer using finite state transducers", "authors": [ { "first": "Vipul", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Student Research Workshop", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vipul Mittal. 2010. Automatic Sanskrit segmentizer using finite state transducers. In Proceedings of the ACL 2010 Student Research Workshop, pages 85-90. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "S3 - statistical sa\u1e43dhi splitting", "authors": [ { "first": "Abhiram", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "301--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhiram Natarajan and Eugene Charniak. 2011. S3 - statistical sa\u1e43dhi splitting. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 301-308. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Building a word segmenter for Sanskrit overnight", "authors": [ { "first": "Vikas", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Amrith", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Vishnu Dutt", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Prateek", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vineeth", "middle": [ "M" ], "last": "R", "suffix": "" }, { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" } ], "year": 2018, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikas Reddy, Amrith Krishna, Vishnu Dutt Sharma, Prateek Gupta, Vineeth M. R, and Pawan Goyal. 2018. Building a word segmenter for Sanskrit overnight. In CoRR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distinctive features of poetic syntax: preliminary results", "authors": [ { "first": "Peter", "middle": [], "last": "Scharf", "suffix": "" }, { "first": "Anuja", "middle": [], "last": "Ajotikar", "suffix": "" }, { "first": "Sampada", "middle": [], "last": "Savardekar", "suffix": "" }, { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" } ], "year": 2015, "venue": "Sanskrit Syntax", "volume": "", "issue": "", "pages": "305--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Scharf, Anuja Ajotikar, Sampada Savardekar, and Pawan Goyal. 2015. Distinctive features of poetic syntax: preliminary results. In Sanskrit Syntax, pages 305-324.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Linguistic Issues in Encoding Sanskrit", "authors": [ { "first": "Peter", "middle": [], "last": "Scharf", "suffix": "" }, { "first": "Malcolm", "middle": [], "last": "Hyman", "suffix": "" } ], "year": 2009, "venue": "Motilal Banarsidass", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Scharf and Malcolm Hyman. 2009. Linguistic Issues in Encoding Sanskrit.
Motilal Banarsidass, Delhi.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Word segmentation of Vietnamese texts: a comparison of approaches", "authors": [ { "first": "Dinh", "middle": [ "Q" ], "last": "Thang", "suffix": "" }, { "first": "Le", "middle": [ "H" ], "last": "Phuong", "suffix": "" }, { "first": "Nguyen", "middle": [ "T", "M" ], "last": "Huyen", "suffix": "" }, { "first": "Nguyen", "middle": [ "C" ], "last": "Tu", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "Rossignol", "suffix": "" }, { "first": "Vu", "middle": [ "X" ], "last": "Luong", "suffix": "" } ], "year": 2008, "venue": "LREC'08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinh Q. Thang, Le H. Phuong, Nguyen T. M. Huyen, Nguyen C. Tu, Mathias Rossignol, and Vu X. Luong. 2008. Word segmentation of Vietnamese texts: a comparison of approaches. In LREC'08, Marrakech, Morocco.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multiple character embeddings for Chinese word segmentation", "authors": [ { "first": "Jingkang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianing", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Gongshen", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "210--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingkang Wang, Jianing Zhou, Jie Zhou, and Gongshen Liu. 2019. Multiple character embeddings for Chinese word segmentation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 210-216.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Chinese word segmentation as character tagging", "authors": [ { "first": "N", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "8", "issue": "", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Xue. 2003. Chinese word segmentation as character tagging. In Computational Linguistics and Chinese Language Processing, volume 8(1), pages 29-48.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "3", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "A simplified lexical analyzer", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "The interface for choosing or rejecting the obtained split words for the example r\u0101m\u0101layo \u2032sti. Figure 3: The first solution for the sentence r\u0101m\u0101layo \u2032sti", "uris": null } } } }