in Fig. 6.

Fig. 5: Translation of the sentence "এই সমােজ বৃ লােকরা অচল পয়সা".
Fig. 6: Translation of the sentence "এই সমােজ বৃ লােকরা অচল পয়সা" in Google Translator.

B. Accuracy Rate

We observed that, among 15,000 sentences, a total of 12,800 sentences were correctly translated by our proposed model. The accuracy rate is the ratio of the correctly translated sentences to the total number of sentences. Table V shows the accuracy rate for sentences of different lengths, and a graph of the system's accuracy rate versus sentence length is shown in Fig. 7. From this graph we can observe that the accuracy rate decreases as sentence length increases.

TABLE V: ACCURACY RATE FOR SENTENCES OF DIFFERENT LENGTHS

Sentence length | No. of input sentences | Correctly translated sentences | Overall accuracy (%)
3               | 3500                   | 3300                           | 94.29
4               | 3250                   | 2850                           | 87.69
5               | 3100                   | 2650                           | 85.48
6               | 2750                   | 2150                           | 78.18
7               | 2400                   | 1850                           | 77.08
Total           | 15000                  | 12800                          | 85.33

Fig. 7: Accuracy vs. sentence length.
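As a minimal illustration of how the figures in Table V follow from the correct/total ratio, the sketch below recomputes the per-length and overall accuracy in Python. The dictionary of counts simply restates the table; the helper itself is not part of the proposed system.

    # Accuracy-rate computation for Table V (illustrative sketch only;
    # the counts are those reported in the table).
    counts = {3: (3500, 3300), 4: (3250, 2850), 5: (3100, 2650),
              6: (2750, 2150), 7: (2400, 1850)}  # length -> (input, correct)

    total_in = sum(n for n, _ in counts.values())
    total_ok = sum(c for _, c in counts.values())
    for length, (n, c) in sorted(counts.items()):
        print(f"length {length}: accuracy = {100 * c / n:.2f}%")
    print(f"overall: {100 * total_ok / total_in:.2f}%")  # 85.33%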
C. Comparison Analysis

A comparison between our proposed method and paper [12] is given in Table VI, covering parameters such as application, emphasis, feature, and accuracy. Table VI shows that paper [12] uses an XML markup language as its feature, whereas our system uses a rule-based approach, which is more appropriate for this task than XML markup.

TABLE VI: COMPARISON BETWEEN PAPER [12] AND OUR PROPOSED SYSTEM

                 | Paper [12] [R. Agrawal, 2018] | Our proposed system
Application      | MT, sentiment analysis        | MT
Emphasis         | Indian languages              | Only Bangla language
Feature          | XML markup                    | Rule based
Accuracy         | 2.69% BLEU score              | 85.33% for 0.015 million corpora
Dataset (idioms) | 2208 for 7 languages          | 986 for Bangla language

V. CONCLUSION

The aim of our paper is to translate Bangla sentences containing idioms into their corresponding English sentences. The idea was to design a proper parsing technique to parse Bangla sentences with idioms. Our proposed algorithm is able to detect an idiom and translate it to its corresponding English meaning. The experimental results show that our technique achieves an accuracy of 85.33%. Our system might not produce the exact parse tree for some sentences, and to evaluate the implemented parsing model we chose very simple and short Bangla sentences. A stronger parser for Bangla sentences could be designed by updating the CSG rules; this can be done using semantic features in further research.

REFERENCES

[1] Wikipedia. (2019) Natural language processing. [Online]. Available: https://en.wikipedia.org/wiki/Natural_language_processing
[2] M. Nagao, J. Tsujii, and J. Nakamura, "Machine translation from Japanese into English," Proceedings of the IEEE, vol. 74, no. 7, pp. 993–1012, July 1986.
[3] M. Graça, Y. Kim, J. Schamper, J. Geng, and H. Ney, "The RWTH Aachen University English-German and German-English unsupervised neural machine translation systems for WMT 2018," in Proceedings of the Third Conference on Machine Translation: Shared Task Papers, 2018, pp. 377–385.
[4] M. Z. Islam, J. Tiedemann, and A. Eisele, "English to Bangla phrase-based machine translation," in Proceedings of the 14th Annual Conference of the European Association for Machine Translation, 2010.
[5] M. G. R. Alam, M. M. Islam, and N. Islam, "A new approach to develop an English to Bangla machine translation system," Daffodil International University Journal of Science and Technology, vol. 6, no. 1, pp. 36–42, 2011.
[6] K. Muntarina, M. G. Moazzam, and M. A.-A. Bhuiyan, "Tense based English to Bangla translation using MT system," International Journal of Engineering Science Invention, vol. 2, no. 10, pp. 30–38, 2013.
[7] M. Rabbani, K. M. R. Alam, and M. Islam, "A new verb based approach for English to Bangla machine translation," in 2014 International Conference on Informatics, Electronics & Vision (ICIEV). IEEE, 2014, pp. 1–6.
[8] M. S. Arefin, L. Alam, S. Sharmin, and M. M. Hoque, "An empirical framework for parsing Bangla assertive, interrogative and imperative sentences," in 2015 International Conference on Computer and Information Engineering (ICCIE). IEEE, 2015, pp. 122–125.
[9] T. Alamgir, M. S. Arefin, and M. M. Hoque, "An empirical machine translation framework for translating Bangla imperative, optative and exclamatory sentences into English," in 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV). IEEE, 2016, pp. 932–937.
[10] T. Alamgir and M. S. Arefin, "An empirical framework for parsing Bangla imperative, optative and exclamatory sentences," in 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE). IEEE, 2017, pp. 164–169.
[11] M. Haque and M. Hasan, "English to Bengali machine translation: An analysis of semantically appropriate verbs," in 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET). IEEE, 2018, pp. 217–221.
[12] R. Agrawal, V. C. Kumar, V. Muralidharan, and D. M. Sharma, "No more beating about the bush: A step towards idiom handling for Indian language NLP," in Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.
Bengali to Assamese Statistical Machine Translation using Moses (Corpus Based)

Nayan Jyoti Kalita, Department of CSE, Royal School of Engineering and Technology, Guwahati, India
Baharul Islam, Department of IT, Gauhati University, Guwahati, India
{nayan.jk.123, islambaharul65}@gmail.com

Abstract—Machine translation plays a major role in facilitating both man-machine and human-to-human communication in Natural Language Processing (NLP). Machine Translation (MT) refers to using a machine to convert one language into another. Statistical Machine Translation (SMT) is a form of MT consisting of a Language Model (LM), a Translation Model (TM), and a decoder. In this paper, a Bengali to Assamese Statistical Machine Translation model has been created using Moses. Other translation tools, IRSTLM for the language model and GIZA++ v1.0.7 for the translation model, are used within this framework, which is available in Linux environments. The purpose of the LM is to encourage fluent output, the purpose of the TM is to encourage similarity between input and output, and the decoder maximizes the probability of the translated text in the target language. A parallel corpus of 17,100 sentences in Bengali and Assamese has been used for training within this framework. Statistical MT techniques have not so far been widely investigated for Indian languages, and it is interesting to discover to what degree these models can help the large ongoing MT efforts in the country.

I. INTRODUCTION

Multilingualism is considered to be a part of democracy, and with the increasing growth of technology the language barrier should not remain a problem. It is important to provide information to people as and when needed, and in their native language. Machine translation is not primarily an area of abstract intellectual inquiry but the application of computer and language sciences to the development of systems answering practical needs. The focus of the research presented here is to investigate the effectiveness of phrase-based statistical Bengali-Assamese translation using the Moses toolkit.

The field of natural language processing (NLP) began roughly five decades ago with machine translation systems. In 1946, Warren Weaver and Andrew Donald Booth examined the technical feasibility of machine translation "by means of the methods developed during World War II for the breaking of enemy codes" [1]. Throughout the more than fifty years of its existence, the field has developed from the dictionary-based machine translation systems of the fifties to the more versatile, powerful, and easy-to-use NLP environments of the nineties.

Machine translation is the name for computerized systems that automate all or part of the process of translating from one language into another. In a huge multilingual society like India, there is great demand for translation of documents from one language to another. There are 22 constitutionally recognized languages, which are officially used in different states, something like 1,650 dialects spoken by different communities, and 10 Indic scripts. These languages are well developed and rich in content, they have similar scripts and sentence structures, and their alphabetic order is also similar.
A few languages use a common script, particularly Devanagari. Hindi, written in the Devanagari script, is the official language of the Government of India; English is also used for government notices and communications. India's average literacy level is 65.4 percent (Census 2001), and less than 5 percent of people can either read or write English. Since most state governments work in regional languages while the central government's official documents and reports are in English or Hindi, these documents have to be translated into the respective regional languages to communicate properly with the people.

Work in the area of machine translation in India has been going on for several decades. During the early 90s, advanced research in the fields of Artificial Intelligence and Computational Linguistics made promising progress in translation technology, which aided the development of usable machine translation systems in certain well-defined domains. Since 1990, research on MT systems between Indian and foreign languages, as well as between Indian languages, has been going on in various organizations. Translation between structurally similar languages like Hindi and Punjabi is easier than between language pairs with wide structural differences like Hindi and English; translation systems between closely related languages are easier to develop since they have many parts of their grammars and vocabularies in common [2].

The organization of the paper is as follows. Section II gives an outline of the Assamese and Bengali languages. Section III describes related work on machine translation. Section IV gives an outline of machine translation in general and of statistical machine translation in particular. In Section V, the design and implementation of the system is discussed. Section VI gives the results obtained from our experiment. Section VII concludes the report.

II. ASSAMESE AND BENGALI LANGUAGE

Assamese is the main language of the state of Assam and is regarded as the lingua franca of the whole of North-East India. It is spoken by most of the natives of the state of Assam: as a first language by over 15.3 million people, and including those who speak it as a second language, by a total of about 20 million. Assamese is mainly used in the North-Eastern state of Assam and in parts of the neighboring states of West Bengal and Meghalaya; small pockets of Assamese speakers can also be found in Bhutan and Bangladesh, and emigrants from Assam have carried the language with them to other parts of the world. Although scholars trace the history of Assamese literature to the beginning of the second millennium AD, an unbroken record of literary history is traceable only from the fourteenth century. The Assamese language developed out of Sanskrit, the ancient language of the Indian subcontinent; however, its vocabulary, phonology, and grammar have been substantially influenced by the original inhabitants of Assam, such as the Bodos and the Kacharis.

Bengali, or Bangla, is an Indo-Aryan language that originated from Sanskrit. It is native to the region of eastern South Asia known as Bengal, which comprises present-day Bangladesh and the Indian state of West Bengal.
With almost 230 million native speakers, Bengali is one of the most widely spoken languages in the world. Bengali follows Subject-Object-Verb word order, although variations on this pattern are common. Bengali makes use of postpositions, as opposed to the prepositions used in English and other European languages. Determiners follow the noun, while numerals, adjectives, and possessors precede the noun. Bengali has two literary styles: one is called Sadhubhasa (elegant language) and the other Chaltibhasa (current language) or Cholit Bangla. The former is the traditional literary style based on Middle Bengali of the sixteenth century, while the latter is a twentieth-century creation modeled on the vernacular spoken in the Shantipur region of West Bengal, India.

III. RELATED WORKS

Georgetown University designed the first Russian-to-English MT system in 1954. After that, many MT projects with different characteristics were developed. Around the 1970s, the centre of MT activity shifted from the United States to Canada and then to Europe, where the European Commission introduced Systran, a French-English MT system. Around the 1980s, numerous MT systems appeared [3]. CMU, IBM, ISI, and Google use phrase-based systems with good results. In the early 1990s, the advances made by applying statistical methods to speech recognition, introduced by IBM researchers, led to purely statistical MT models. Today, high-quality automatic translation has come into view, and the whole research community has moved towards corpus-based methods.

Machine Translation Projects in India: MT is a growing research area in NLP for Indian languages, with approaches for English to Indian languages and between Indian languages, and many researchers and groups are involved in the development of MT systems. The main developments in Indian-language MT systems are given below:

1. ANGLABHARTI (1991): This is a machine-aided translation system for translation from English to Hindi, built for public health campaigns. It analyses English only once and creates an intermediate structure that is almost fully disambiguated. The intermediate structure is then converted to each Indian language through a process of text generation.

2. ANGLABHARTI-II (2004): This system (Sinha et al., 2003) addressed the disadvantages of the previous system. To improve translation performance, a different approach is used: a Generalized Example-Base (GEB) for hybridization in addition to a Raw Example-Base (REB). In this system a match in the REB and GEB is attempted before invoking the rule base, and various pipelined sub-modules give more accuracy. ANGLABHARTI technology is presently under the ANGLABHARTI Mission, whose main aim is to develop Machine Aided Translation (MAT) systems from English to twelve different Indian regional languages: Marathi and Konkani, Assamese and Manipuri, Bangla, Urdu, Sindhi and Kashmiri, Malayalam, Punjabi, Sanskrit, and Oriya [4].

3. ANUBHARATI (1995): This system aimed at translating Hindi to English.
It is based on machine-aided translation in which a template-based or hybrid example-based approach (HEBMT) is used. The HEBMT combines the advantages of pattern-based and example-based approaches and provides a generic model for translation between any pair of Indian languages [5].

4. ANUBHARATI-II (2004): ANUBHARATI-II is a revised form of ANUBHARATI that overcomes the majority of the drawbacks of the earlier architecture with a varying degree of hybridization of different paradigms. The principal aim of this framework is to translate Hindi to any other Indian language with a generalized hierarchical example-based approach. Although neither ANGLABHARTI-I nor ANUBHARATI-II initially produced the expected results, both systems have since been implemented successfully with good results [5].

5. MaTra (2004): MaTra is a human-assisted translation system that converts English to Indian languages (at present Hindi). MaTra is an innovative system in which the user can examine the analysis produced by the system and provide disambiguation information to arrive at a single correct translation. MaTra is an ongoing project; the system can currently handle domain-specific simple sentences, and progress has been made towards covering other kinds of sentences [5].

6. MANTRA (1999): This system is mainly for English to Indian languages and also from Indian languages to English. The system can preserve the formatting of Word documents across the translation [5].

7. A hybrid MT system for English to Bengali: This MT system for English to Bengali was created at Jadavpur University, Kolkata, in 2004. The current version of the system works at the sentence level [6].

IV. LITERATURE REVIEW

A. Machine Translation

Machine Translation (MT) is the use of computers to automate the production of translations from one natural language into another, with or without human assistance. An MT system translates source text into target text, using one of various approaches to complete the translation. Machine translation is considered a difficult task. These systems incorporate a lot of knowledge about words and about the language (linguistic knowledge). Such knowledge is stored in one or more lexicons, and possibly in other sources of linguistic knowledge, such as a grammar. The lexicon is an important component of any MT system: it contains all the relevant information about words and phrases that is required for the various levels of analysis and generation. A typical lexicon entry for a word would contain the part of speech, the morphological variants, the expectations of the word, some kind of semantic or sense information, and information about the equivalent of the word in the target language [7].

Challenges in machine translation:

1) All the words in one language may not have equivalent words in another language; sometimes a word in one language is expressed by a group of words in another.

2) Two given languages may have completely different structures. For example, English has SVO structure while Assamese has SOV structure.

3) Words can have more than one meaning, and sometimes a group of words or a whole sentence may have more than one meaning in a language.
4) Since all natural languages are very vast, it is almost impossible to include all the words and transfer rules in a dictionary.

5) Since both Assamese and Bengali are free-word-order languages, the translation of a sentence may sometimes give a different meaning.

6) Assamese produces negations by putting না or ন in front of the verb, and Assamese verbs have complete sets of negative conjugations with the negative particle na-; Bengali does not have any negative conjugations.

7) Assamese definitives (the Assamese for 'the': ট া (tu), জন (jan), জনী (jani), খন (khan), etc.) have no parallels in Bengali.

B. Statistical Machine Translation

The Statistical Machine Translation (SMT) system is based on the view that every sentence in a language has a possible translation in another language, and that a sentence can be translated from one language to another in many possible ways. Statistical translation approaches take the view that every sentence in the target language is a possible translation of the input sentence [7]. Figure 1 gives the outline of a statistical machine translation system.

Fig. 1: Architecture of an SMT system.

Language Model: A language model gives the probability of a sentence, computed using an n-gram model. A language model can be seen as computing the probability of a single word given all of the words that precede it in a sentence. By the chain rule, the probability of a sentence P(S) is decomposed into a product of conditional word probabilities:

P(S) = P(w1, w2, ..., wn) = P(w1) P(w2|w1) P(w3|w1 w2) ... P(wn|w1 w2 ... wn-1)

An n-gram model simplifies the task by approximating the probability of a word given all the previous words by the probability given only the last n-1 of them. An n-gram of size 1 is referred to as a unigram, size 2 a bigram (or, less commonly, a digram), and so on.
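To make the chain-rule decomposition concrete, here is a minimal bigram language model sketch in Python. It is illustrative only and is not the IRSTLM model the paper uses; the toy English corpus and the maximum-likelihood estimation without smoothing are assumptions made for brevity.

    from collections import Counter

    # Toy corpus; the paper's LM is trained on the Assamese side of the real corpus.
    corpus = [["<s>", "this", "is", "a", "test", "</s>"],
              ["<s>", "this", "is", "another", "test", "</s>"]]

    unigrams = Counter(w for sent in corpus for w in sent)
    bigrams = Counter((sent[i], sent[i + 1])
                      for sent in corpus for i in range(len(sent) - 1))

    def p_bigram(w_prev, w):
        """Maximum-likelihood estimate of P(w | w_prev), no smoothing."""
        return bigrams[(w_prev, w)] / unigrams[w_prev]

    def p_sentence(sent):
        """P(S) as the product of P(w_i | w_{i-1}) under the bigram approximation."""
        p = 1.0
        for i in range(1, len(sent)):
            p *= p_bigram(sent[i - 1], sent[i])
        return p

    print(p_sentence(["<s>", "this", "is", "a", "test", "</s>"]))  # 0.5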
Translation Model: This model computes the conditional probability P(T|S). It is trained using a parallel corpus of target-source pairs. Since no corpus is large enough to allow the computation of translation-model probabilities at the sentence level, the process is broken down into smaller units, e.g., words or phrases, whose probabilities are learnt. The translation of a source sentence is thought of as being generated from the source word by word, as illustrated in Figure 2.

Fig. 2: Translation of a sentence.

A possible alignment for the pair of sentences can be represented as:

(আসামে একট সুন্দর জায়গা । | অসে (1) এখন (2) সুন্দৰ (3) ঠাই (4) । (5))

A number of alignments are possible; for simplicity, word-by-word alignment is considered in the translation model. The set of alignments above is denoted A(S, T). If the length of the target is m and that of the source is n, then m^n different alignments are possible, and every connection for each target position is equally likely. The order of words in T and S therefore does not affect P(T|S), and the likelihood can be written as a sum of conditional probabilities over alignments:

P(S|T) = Σ_a P(S, a|T)

where the sum runs over the elements of the alignment set A(S, T).

Decoder: This phase of SMT maximizes the probability of the translated text; the words chosen are those with the maximum likelihood of being the translation. A search is performed for the sentence T that maximizes P(T) P(S|T), i.e.

T* = argmax_T P(T) P(S|T)

The problem here is the infinite space to be searched, so the use of stack search is suggested, in which we maintain a list of partial alignment hypotheses. The search starts with a null hypothesis, which means that the target sentence is obtained from a sequence of source words that we do not know [7].
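The decoder's argmax can be sketched as a brute-force search over a tiny candidate set. This is purely illustrative of the noisy-channel scoring P(T)·P(S|T): Moses' actual stack decoder prunes partial hypotheses rather than enumerating whole sentences, and the candidate strings and probability tables below are invented numbers.

    # Noisy-channel decoding sketch: pick the target sentence T that
    # maximizes P(T) * P(S|T). Candidates and probabilities are made up;
    # a real decoder searches over partial hypotheses instead.
    candidates = ["অসে এখন সুন্দৰ ঠাই", "সুন্দৰ ঠাই অসে এখন"]

    lm_prob = {"অসে এখন সুন্দৰ ঠাই": 0.02, "সুন্দৰ ঠাই অসে এখন": 0.001}   # P(T)
    tm_prob = {"অসে এখন সুন্দৰ ঠাই": 0.30, "সুন্দৰ ঠাই অসে এখন": 0.28}    # P(S|T)

    best = max(candidates, key=lambda t: lm_prob[t] * tm_prob[t])
    print(best)  # the fluent word order wins because the LM favours it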
V. METHODOLOGY

This section covers corpus collection, data preparation, development of the language model and the translation model, and training of the decoder using the Moses tool.

A. Corpus Preparation

A statistical machine translation system uses a parallel corpus of source- and target-language pairs. For this, we developed a Bengali to Assamese parallel corpus of approximately 20,000 sentences, consisting of short sentences related to novels, stories, travel, and tourism in India. Table I shows the number of sentences used for training, testing, and tuning.

TABLE I: Number of sentences for training, testing, and tuning.

Corpus   | No. of sentences | Source | Target
Training | 17000            | 17000  | 17000
Testing  | 1500             | 1500   | 1500
Tuning   | 1500             | 1500   | 1500

B. Language Model Training

Language models are created with a language-modeling toolkit, with a number of variables that can be adjusted to enable better translation. The model is built on the target language (i.e., Assamese), since it is important for the system to know how the language it outputs should be structured. The IRSTLM documentation gives a full explanation of the command-line options [8].

C. Training the Translation System

Finally we come to the main phase: training the translation model. This runs word alignment (using GIZA++) and phrase extraction and scoring, creates the lexicalized reordering tables, and creates the Moses configuration file. We created an appropriate directory as follows, and then ran the training command [8] (the command below is reconstructed from the garbled original; the file extensions ben/ass are inferred from the corpus name):

    mkdir ~/work
    cd ~/work
    nohup nice ~/mymoses/scripts/training/train-model.perl \
        -root-dir train -corpus ~/corpusproject/ben-ass1.clean \
        -f ben -e ass -alignment grow-diag-final-and \
        -reordering msd-bidirectional-fe \
        -lm 0:3:$HOME/lm/ben-ass1.blm.ben:8 \
        -external-bin-dir ~/mymoses/tools >& training.out &

Once it has finished, a moses.ini file is created in the directory ~/work/train/model. We can use this ini file to decode, but there are a couple of problems with it. The first is that it is very slow to load (usually in the case of a large corpus), but we can fix that by binarising the phrase table and reordering table, i.e., compiling them into a format that can be loaded quickly. The second problem is that the weights used by Moses to balance the different models against each other are not optimized; if we look at the moses.ini file we see that they are set to default values like 0.2, 0.3, etc. To find better weights we need to tune the translation system, which leads us to the next step.

D. Tuning

Tuning is the slowest part of the process. We again collected a small amount of parallel data separate from the training data [8], and tokenized and truecased it first, just as in the training process. We then went back to the training directory and launched the tuning process. After the tuning process has finished, an ini file with tuned weights is created in the directory ~/work/mert-work/moses.ini.

E. Testing

We can now run Moses with the following command:

    ~/mymoses/bin/moses -f ~/work/mert-work/moses.ini

We can type a Bengali sentence and get the output in Assamese. We can also echo the sentence to get the output, like this:

    echo "আটে টগৌহাট টিশ্বটিদ্যালময়র একজন ছাত্র" | ~/mymoses/bin/moses -f ~/work/mert-work/moses.ini

This gives the output:

    েই গুৱাহা ী টিশ্বটিদ্যালয়ৰ ছাত্র

We can now measure how good our translation system is. For this, we use another parallel data set, which we again tokenize and truecase as before. The model that we trained can then be filtered for this test set, meaning that we retain only the entries needed to translate the test set; this makes the translation a lot faster. We can test the decoder by translating the test set and then running the BLEU script on it.
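The paper evaluates with Moses' BLEU scripts; as an equivalent, minimal alternative, the same corpus-level BLEU score can be computed with the sacrebleu Python package. The file names here are hypothetical placeholders, not files from the paper.

    # Corpus-level BLEU with sacrebleu (pip install sacrebleu).
    # hyp.ass holds decoder output and ref.ass the reference translations,
    # one sentence per line; both file names are placeholders.
    import sacrebleu

    with open("hyp.ass", encoding="utf-8") as f:
        hypotheses = [line.strip() for line in f]
    with open("ref.ass", encoding="utf-8") as f:
        references = [line.strip() for line in f]

    # corpus_bleu expects a list of reference streams (here just one).
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(bleu.score)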
VI. RESULT AND ANALYSIS

Table II shows some sentences translated by our system.

TABLE II: Some sentences translated by our system.

Bengali sentence | Translated Assamese sentence
টদ্ল্লী ভারমের রাজধানী | টদ্ল্লী ভাৰেৰ ৰাজধানী
আসামে একট সুন্দর জায়গা | অসে এখন সুন্দৰ ঠাই
ভারে একট িড় টদ্শ | ভাৰে এখন ডাঙৰ টদ্শ
হায়দ্রািাদ্ অন্ধ্ৰপ্ৰমদ্শ েমধয অৱটিে | হায়দ্ৰািাদ্ অন্ধ্ৰপ্ৰমদ্শৰ েধযে অৱটিে
উদ্য়঩ুর রাজিামন দ্টিণ অংমশ অিটিে | উদ্য়঩ুৰ ৰামজিানৰ দ্টিণ অংশে অিটিে
আটে টগৌহাট টিশ্বটিদ্যালময়র একজন ছাত্র | েই গুৱাহা ী টিশ্বটিদ্যালয়ৰ ছাত্র

For the experiments, we chose three sets of randomly selected sentences with 200, 250, and 300 sentences. Table III is the analysis table for these sets, and the values are shown graphically in Figure 3. After going through the results, we can say that the errors are due to the following reasons:

1. The number of words in our corpus is extremely limited.
2. The PoS tagger entries are not complete.
3. Sometimes there are multiple word entries in the target-language lexicon for a single word in the source-language lexicon; for instance, for both the Assamese words নগৰ and চহৰ, the Bengali word is শহর.

TABLE III: Analysis of the observed sentence sets.

Sets  | Total | Successful | Unsuccessful | % of error
Set 1 | 200   | 165        | 35           | 17.5
Set 2 | 250   | 211        | 39           | 15.6
Set 3 | 300   | 259        | 41           | 13.7

Fig. 3: Graphical analysis of the observed sentence sets.

The output of the experiment was evaluated using BLEU (Bilingual Evaluation Understudy), a metric that estimates the quality of machine-translated text [8]. We obtained a BLEU score of 16.3 on the parallel Bengali-Assamese corpus after translation. This is very small, likely because we used a very small data set. BLEU scores are not commensurate even between different corpora in the same translation direction; BLEU is really only comparable for different systems or system variants on exactly the same data. For the same corpus in two directions, an imperfect analogy might be gas mileage between two cities: no matter how consistently you drove, you would not expect the same gasoline usage driving from city C to D as in the other direction, especially if one direction is more uphill than the other.

VII. CONCLUSIONS AND FUTURE WORK

In this paper, a Bengali to Assamese SMT system has been developed. Our method of extracting translation patterns is a relatively simple one, built on the phrase-based decoder in Moses, and we report the resulting BLEU score. In future work we will try to develop the translation system ourselves instead of using the Moses MT system, and to enlarge the corpus for better training and efficiency. The system could also be put into a web-based portal to translate the content of a web page from Bengali to Assamese. We will try to obtain more corpora from different domains so as to cover a wider vocabulary. Since the BLEU score is low, we also need additional evaluation techniques, and we should try incorporating shallow syntactic information (PoS tags) into our discriminative model to boost translation performance.

REFERENCES

[1] A. D. Booth and W. N. Locke, "Historical introduction," Machine Translation of Languages, pp. 1–14, 1955.
[2] S. Dave, J. Parikh, and P. Bhattacharyya, "Interlingua-based English-Hindi machine translation and language divergence," Machine Translation, vol. 16, no. 4, pp. 251–304, 2001.
[3] S. Borgohain and S. B. Nair, "Towards a pictorially grounded language for machine-aided translation," Int. J. of Asian Lang. Proc., vol. 20, no. 3, pp. 87–110, 2010.
[4] K. Vijayanand, S. I. Choudhury, and P. Ratna, "Vaasaanubaada: automatic machine translation of bilingual Bengali-Assamese news texts," in Language Engineering Conference, 2002. Proceedings. IEEE, 2002, pp. 183–188.
[5] P. Antony, "Machine translation approaches and survey for Indian languages," Computational Linguistics and Chinese Language Processing, vol. 18, pp. 47–78, 2013.
[6] M. S. Islam, "Research on Bangla language processing in Bangladesh: progress and challenges," in 8th International Language and Development Conference, 2009, pp. 23–25, 527–533.
[7] D. D. Rao, "Machine translation," Resonance, vol. 3, no. 7, pp. 61–70, 1998.
[8] P. Koehn, "Moses, statistical machine translation system, user manual and code guide," 2010.
BENGALI INFORMATION RETRIEVAL SYSTEM (BIRS)

International Journal on Natural Language Computing (IJNLC), Vol. 8, No. 5, October 2019. DOI: 10.5121/ijnlc.2019.8501

Md. Kowsher (1), Imran Hossen (2), and Sk Shohorab Ahmed (2)
(1) Department of Applied Mathematics, Noakhali Science and Technology University, Noakhali-3814, Bangladesh. ga.kowsher@gmail.com
(2) Department of Information and Communication Engineering, University of Rajshahi, Rajshahi-6205, Bangladesh. imranhsobuj97@gmail.com, shohorab.ahmed.it@gmail.com

ABSTRACT

An information retrieval system is an effective process that helps a user trace relevant information by Natural Language Processing (NLP).
In this research paper, we present an algorithmic Bengali Information Retrieval System (BIRS) that is grounded mathematically and statistically. The paper demonstrates two algorithms for finding the lemmatization of Bengali words, Trie and Dictionary Based Search by Removing Affix (DBSRA), which are compared with Edit Distance for exact lemmatization. We present a Bengali anaphora resolution system using Hobbs' algorithm to obtain the correct expression of the information. As the question-answering algorithms, TF-IDF and Cosine Similarity are developed to find the accurate answer from the documents. In this study, we introduce a Bengali Language Toolkit (BLTK) and a Bangla Regular Expression (BRE) tool that ease the implementation of our task. We have also developed a Bengali root-word corpus, a synonym-word corpus, and a stop-word corpus, and gathered 672 articles from the popular Bengali newspaper 'The Daily Prothom Alo' as our source of information. For testing this system, we created 19,334 questions from the collected information and obtained 97.22% accurate answers.

KEYWORDS

Bangla Language Processing, Information Retrieval, Corpus, Mathematics, Statistics.

1. INTRODUCTION

Information Retrieval (IR) simply refers to retrieving information from a collection of sources based on a relevant query. It is the science of searching for information in a document, searching for documents themselves, and searching for text, images, and sounds. Searching is mainly based either on full text or on content. By default, IR means text retrieval, for historical reasons [1]. Recommender systems, which are intimately related to information retrieval systems, work without a query. Automated information retrieval is most often used to reduce information overload. An IR system is basically a software environment that gives access to documents like books, journals, magazines, and so on; documents can be stored and managed using such a system. Today, web search engines are the most visible IR application.

Every day, a huge amount of information is produced by newspapers, social networking sites, and other kinds of websites. Due to these large collections of digital documents on the web or on local machines, finding the desired information is a tedious process.
Finding relevant information based on a query has challenges such as word mismatch: a sentence can be phrased in different ways with the same meaning but a different structure, and a question can be formulated in different ways using synonymous words. So retrieving the desired information is a very challenging and difficult task.

The Boolean retrieval model was used in the first search engines and is still in use today [2]. Documents are retrieved if they correspond exactly to the query terms, but this does not generate a ranking, since the model assumes all documents in the retrieved set to be relevant; the lack of an appropriate ranking algorithm is the major drawback of this approach. Therefore, the Vector Space Model [3] was suggested, in which word weighting and similarity between documents can be defined. Terms are usually weighted by frequency in the document (TF, or Term Frequency) or normalized with respect to the number of documents in which they appear (IDF, or Inverse Document Frequency). Unlike the previously mentioned retrieval models, probabilistic models provide some theoretical assurance that they will perform well as long as the model assumptions are consistent with the observed data. The Probability Ranking Principle (PRP) [4] states that ranking documents by their probability of relevance will maximize precision at any given rank, assuming a document's relevance to a query is independent of other documents. One of the first techniques for calculating the probability of relevance from document terms was suggested in [5]; other methods assess text statistics in documents and queries and use them to construct the term weighting function [6]. The BM25 ranking algorithm works well in many tasks [7], and more advanced methods such as Relevance-Based Language Models (Relevance Models, or RM for short) are among the best-performing text retrieval ranking techniques [8]. We are mainly interested in addressing the problem using TF-IDF and cosine similarity: TF is used to calculate the importance of every word in a sentence, IDF calculates the actual importance of the words in a document, and cosine similarity is then used to figure out the relationship between questions and sentences. Our main objective is to retrieve relevant information within a short time with great accuracy.

2. RELATED WORK

For foreign languages, for instance English, text retrieval methods have been widely studied in many distinct fields, such as web information retrieval [9], picture and video retrieval [10], and content-based recommendation [11]. In the Bangla language, however, the study of information retrieval is not yet satisfactory. A joint correlation technique [12] is used to extract and recognize Bangla numbers from document images. Bangla OCR by the Center for Research on Bangla Language Processing (CRBLP), which converts written text or pictures into editable Unicode text [13], has weaknesses as it is restricted in scope; however, greater efficiency has been achieved by the OCR scheme for Bangla [14] and the online handwritten OCR for Bangla [15].
Much research has previously been carried out on summarization, such as extracting Bangla sentences for document summarization [16]. In addition, knowledge extraction from Bangla blogs and news [17], Bangla text extraction from real images [18], and Bangla sentiment analysis from micro-blogs, media channels, and customer feedback portals [19] have been widely studied. Unlike these works, we present an information retrieval system for the Bangla language with the help of mathematics and statistics.

3. PROPOSED WORK

In this paper, we introduce a Bengali Information Retrieval System (BIRS) based on Bengali Natural Language Processing (BNLP). The procedure we adopted is divided into three parts: collecting informative documents, pre-processing the data, and finding relationships between informative documents and questions with the help of the TF-IDF model and cosine similarity. Firstly, we collected five types of corpus: Bengali root words, stop words, our informative documents, questions, and synonym words. Secondly, we pre-processed the data. Finally, cosine similarity was used to obtain the relationship between the questions and the answers. Since cosine similarity deals with vectors, the documents and questions were converted to vectors using the TF-IDF model. For better explanation, we take a chunk of information as a paragraph along with two questions, so that the workflow of the retrieval can be illustrated.

Fig. 1: Proposed work.

3.1. Category: Information

<p>বঙ্গবনু্ধ শেখ মুজিবুর রহমান ১৯২০ সালের ১৭ মার্চ টুঙ্গঙ্গপড়া গ্রালম িন্ম গ্রহণ কলরন। তার রািননঙ্গতক িীবন শুরু হল়েঙ্গিে ১৯৩৯ সালে ঙ্গমেনাঙ্গর সু্কলে পড়ার সম়ে শেলকই। ভারত পাঙ্গকস্থান ঙ্গবভক্ত হ়োর পর পূব চ পাঙ্গকস্থালনর উপর পজিম পাঙ্গকস্থালনর অনযা়ে অঙ্গবর্ার বাড়লত োলক। এিনয ঙ্গতঙ্গন ১৯৬৬ সালের ৫ শেবররু়োঙ্গর োলহালর ঙ্গবলরাধী দেসমূলহর একটট িাতী়ে সলেেলন ঐঙ্গতহাঙ্গসক ি়ে দো দাবী শপে কলরন যা ঙ্গিে কায চত পূব চ পাঙ্গকস্তালনর স্বা়েত্তোসলনর পঙ্গরপূণ চ রূপলরখা। অবলেলে ঙ্গতঙ্গন ১৯৭১ সালের ২৬ মার্চ বাাংোলদলের স্বাধীনতার শ ােণা শদন।</p>

Category: Questions

Question-1: বঙ্গবনু্ধ শেখ মুজিবুর রহমান শকাো়ে িন্ম গ্রহণ কলরন?
Question-2: কত তাঙ্গরলখ ঙ্গতঙ্গন ি়ে দো দাবী শপে কলরঙ্গিে?

3.2. Corpus

For the implementation of the Bengali Information Retrieval System (BIRS), we mainly used five types of corpus. The first corpus contains 28,324 Bengali root words; its purpose is to lemmatize Bengali words. The second, containing 382 Bengali stop words, is used to remove unnecessary information, i.e. stop words, from the informative documents and questions. As our third corpus, the informative documents, we compiled 672 articles from a variety of fields of interest such as politics, entertainment, sports, education, science, and technology from the popular Bengali newspaper 'The Daily Prothom Alo'. As our fourth corpus, we created 19,334 questions from the informative documents. Furthermore, for synonymous-word processing, we included 18,454 additional Bengali synonym words, which collectively form our fifth corpus. There are also some other corpora, e.g. for verb processing, unknown-word processing, punctuation removal, and so on.
3.3. Pre-Processing

We need to pre-process, or normalize, the informative documents and the questions through cleaning, verb processing, stop-word removal, tokenization, lemmatization, and synonym-word processing. The unwanted special characters and stop words do not affect any linguistic operations; besides, the stop words of the Bengali language always remain in root form, so pre-processing them is also a good way to reduce execution time.

3.3.1. Anaphora

In linguistics, anaphora is the use of an expression whose interpretation depends upon another expression in context. In a narrower sense, it refers to replacing previously mentioned words (nouns) with other words (pronouns) for further use in context. Successful identification and resolution of anaphora is required in natural language processing and presents challenges in computational linguistics. In the proposed BIRS, we draw on work done in the field of anaphora resolution for pronouns, mainly personal pronouns. Hobbs' algorithm for anaphora resolution is used in our proposed work; it has been adapted for the Bengali language, taking into account the roles of subject and object and their impact on anaphora resolution for reflexive and possessive pronouns. Here are two sentences from our Bengali informative documents:

বঙ্গবনু্ধ শেখ মুজিবুর রহমান ১৯২০ সালের ১৭ মার্চ টুঙ্গঙ্গপড়া গ্রালম িন্ম গ্রহণ কলরন। তার রািননঙ্গতক িীবন শুরু হল়েঙ্গিে ১৯৩৯ সালে ঙ্গমেনাঙ্গর সু্কলে পড়ার সম়ে শেলকই।

Here, 'তাাঁর' (his) is the pronoun referring to 'বঙ্গবনু্ধ শেখ মুজিবরু রহমান' (the noun).

3.3.2. Tokenization

Tokenization is an NLP operation that separates a stream of text into words, sentences, symbols, phrases, or other meaningful elements. We used sentence tokenization via the BLTK tool and mapped every tokenized sentence to its root sentence.

3.3.3. Cleaning

Cleaning refers to removing unwanted characters that contribute nothing to an informative document. For example, colons, semicolons, commas, question marks, exclamation signs, and other punctuation provide no meaningful connotation. We used the Bangla Regular Expression (BRE) tool to remove the unwanted characters from the informative documents and the questions; a Bengali punctuation corpus was used with BRE to carry away punctuation as unwanted data.

3.3.4. Stop Words Removal

Stop words are words that do not affect the overall meaning of the sentences in the documents. For instance, Bengali stop words such as এবাং (and), শকাো়ে (where), অেবা (or), শত (to), and সালে (with) add nothing to the overall meaning of a sentence. Since our proposed BIRS is an algorithmic, data-driven approach, the stop words need to be dismissed: every sentence in the documents is checked for stop words, and any stop word found is deleted. To simplify this step, we used the Bengali Language Toolkit (BLTK), as sketched below.
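The stop-word removal step can be sketched as follows. BLTK and BRE are the authors' own tools and are not reproduced here; this minimal Python sketch uses a whitespace tokenizer and a tiny hand-made stop-word set purely for illustration.

    # Minimal stop-word removal sketch; the real system uses BLTK and a
    # 382-word Bengali stop-word corpus. The stop-word set here is a toy.
    STOP_WORDS = {"এবং", "অথবা", "তে", "সাথে"}

    def remove_stop_words(sentence: str) -> list:
        """Whitespace-tokenize the sentence and drop stop words."""
        return [tok for tok in sentence.split() if tok not in STOP_WORDS]

    print(remove_stop_words("রহিম এবং করিম স্কুলে যায়"))
    # ['রহিম', 'করিম', 'স্কুলে', 'যায়']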
3.3.5. Lemmatization for the Bangla Language

Lemmatization is a significant process for transforming a word into its root word. It is a natural language processing technique that is effective in various applications such as text mining, question answering, and chatbots. In Bengali natural language processing, there are a few verbs that cannot be lemmatized by any system because of the limitations of lemmatization algorithms. For example, শেলে (went) and ঙ্গেল়ে (going) are generated from the root word যাও়ো (go), yet there is no character-level relation between শেলে (went) and যাও়ো (go), so processing these words algorithmically is not a good choice. That is why these types of verbs are mapped directly to their root verbs for easy access.

In our proposed BIRS, we used two techniques for lemmatization: one based on a hash table and the other on a tree data structure, namely "DBSRA" and "Trie". Dictionary Based Search by Removing Affix (DBSRA) is a simple concept, well suited to the lemmatization of Bengali words with low time and space complexity. In this method, the i-th leading characters are first removed from a word (the word we want to reduce to a root), where i = 0, 1, 2, ..., (length of word - 1), and the last j characters are deleted, where j = n, n-1, n-2, ..., 1 for a word of length n. The method then looks up whether the resulting candidates appear in the root-word corpus.

A Trie is a tree-based data structure that is popular for storing data. Like DBSRA, a Trie requires low space and time complexity. A Trie stores the words of the inserted information and questions that share a prefix under the same sequence of edges in the tree, eliminating the need to store the same prefix once per word. In BIRS, the Trie is used as a lemmatization procedure that retrieves all possible lemmas, where every node contains a single character and every two nodes are connected by a single edge. We used the Trie as an additional process for finding the best lemma, since the Trie approach sometimes does not work properly if the input word contains a prefix; eliminating the prefix is then mandatory for proper lemmatization.

Fig. 2: Bangla lemmatization process.
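A minimal sketch of the DBSRA idea described above: strip leading and trailing characters and check each candidate against a root-word dictionary. This is one reading of the paper's informal description, not the authors' implementation, and the three-word root corpus is a toy stand-in for their 28,324-word corpus.

    # DBSRA sketch: generate substrings of the word by removing a prefix
    # (first i chars) and a suffix (last j chars), then return the longest
    # candidate found in the root-word corpus.
    ROOTS = {"যাওয়া", "স্কুল", "সাল"}  # toy root-word corpus

    def dbsra_lemma(word: str):
        n = len(word)
        candidates = [word[i:n - j] for i in range(n) for j in range(n - i)]
        # Prefer the longest surviving candidate as the lemma.
        for cand in sorted(candidates, key=len, reverse=True):
            if cand in ROOTS:
                return cand
        return None  # unknown word; handled later via Edit Distance

    print(dbsra_lemma("স্কুলে"))  # 'স্কুল'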
For the best accuracy, all words of the informative documents and questions are processed with both lemmatization techniques, and the accurate lemma is then selected with the help of Edit Distance. Edit Distance is a well-known way to count the similarity or dissimilarity between two words using dynamic programming. In the lemmatization process, we applied Edit Distance to compare the root-word outputs of DBSRA and Trie. In natural language processing applications, Edit Distance is heavily applied to the spelling correction of words.

Sometimes the lemmatization algorithms are not a good choice for unknown words, where unknown words are names of places, people, or other entities. Edit Distance helps to determine whether a word is known or unknown: it can score the edit against the word itself (not its lemma), and if the probability P(edit|word) is greater than 50% (P(edit|word) > 50%), the word is counted as unknown. In order to process unknown words, we built a corpus of Bengali suffixes that includes শত (te), শি (che), শ়ের (yer), etc. Removing the longest common suffix from the end of an unknown word yields the lemma, or root, of that unknown word.

3.3.6. Synonym Word Processing

Synonymous words convey the same meaning with different words. There is a possibility that users ask questions whose words are not present in the information data although the meaning is the same; in this case, BIRS may fail to answer the questions correctly, and the overall performance will degrade whenever this happens for any user. We therefore gave much importance to synonym-word processing in BIRS, as part of natural language understanding. To handle this situation, a synonym-word corpus was constructed containing 13,189 words in total, and every word was mapped to a common identical word in the informative documents and the questions. In our system, if a word was not synonymous with any word in the synonym corpus, we did not count it as a similar word. After pre-processing, the sentences and questions look as follows:

Sentence-1: <s>বঙ্গবনু্ধ শেখ মুজিবুর রহমান ১৯২০ সাে ১৭ মার্চ টুঙ্গঙ্গপড়া গ্রাম িন্ম গ্রহণ করা</s>
Sentence-2: <s>বঙ্গবনু্ধ শেখ মুজিবুর রহমান রািনীঙ্গত িীবন শুরু হ়ে ১৯৩৯ সাে ঙ্গমেনাঙ্গর সু্কে পড়া সম়ে োকা</s>
Sentence-3: <s>ভারত পাঙ্গকস্থান ঙ্গবভক্ত হ়ে পূব চ পাঙ্গকস্থান পজিম পাঙ্গকস্থান অনযা়ে অঙ্গবর্ার বাড়া োকা</s>
Sentence-4: <s>বঙ্গবনু্ধ শেখ মুজিবুর রহমান ১৯৬৬ সাে ৫ শেবররু়োঙ্গর োলহার ঙ্গবলরাধ দে িাতী সলেেন ঐঙ্গতহাঙ্গসক ি়ে দো দাবী শপে করা োকা কায চ পূব চ পাঙ্গকস্তান স্বা়েত্তোসন পূণ চ রূপলরখা</s>
Sentence-5: <s>অবলেলে বঙ্গবনু্ধ শেখ মুজিবুর রহমান ১৯৭১ সাে ২৬ মার্চ বাাংোলদে স্বাধীন শ ােণা শদও়ো</s>
Question-1: বঙ্গবনু্ধ শেখ মুজিবুর রহমান িন্ম গ্রহণ করা
Question-2: তাঙ্গরখ বঙ্গবনু্ধ শেখ মুজিবুর রহমান ি়ে দো দাবী শপে করা
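The Edit Distance used for comparing candidate lemmas is the standard Levenshtein distance; a minimal dynamic-programming sketch (not the authors' implementation) follows.

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance between a and b via dynamic programming."""
        m, n = len(a), len(b)
        # dp[i][j] = edits needed to turn a[:i] into b[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i
        for j in range(n + 1):
            dp[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[m][n]

    # The smaller the distance, the closer a candidate lemma is to the word.
    print(edit_distance("গেল", "যাওয়া"))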
4. ALGORITHMS

4.1. TF-IDF

TF-IDF is an abbreviation for Term Frequency-Inverse Document Frequency, a numerical technique for finding the importance of a word to a sentence that is both mathematically and statistically well founded. In our system there are several steps to compute the TF-IDF of an inserted sentence. TF calculates the frequency of a term in a document, i.e., how many times the term occurs in it. Firstly, TF is measured for every pre-processed sentence. Because documents vary in length, a small document may contain fewer terms than a large one, which would distort the information retrieved for a query; we therefore normalize TF as the term count divided by the length of the sentence (count ÷ length). Secondly, IDF is useful for figuring out a relevant sentence for a given question: in TF all words are treated as equally important, but IDF determines the actual importance of a word across the documents. Finally, we compute the desired TF-IDF score as the product of TF and IDF for the inserted questions and sentences. To reduce time and space complexity, we calculated TF-IDF only for those words that are related to the input questions asked by the users. The TF-IDF values for the running example are shown in Table 1.

Table 1: TF-IDF of sentences and questions.

Terms | Sent-1 | Sent-2 | Sent-3 | Sent-4 | Sent-5 | Ques-1 | Ques-2
টুঙ্গঙ্গপড়া | 0.053767 | 0 | 0 | 0 | 0 | 0 | 0
রূপলরখা | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
করা | 0.007455 | 0 | 0 | 0.015305 | 0 | 0.056849 | 0.036176
রািনীঙ্গত | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
স্বা়েত্তোসন | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
স্বাধীন | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
১৯৩৯ | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
ঐঙ্গতহাঙ্গসক | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
সাে | 0.030611 | 0.006461 | 0 | 0.003727 | 0.007455 | 0 | 0
৫ | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
োকা | 0 | 0.01479 | 0.018487 | 0.008533 | 0 | 0 | 0
অবলেলে | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
পূব চ | 0 | 0 | 0.033162 | 0.015305 | 0 | 0 | 0
ঙ্গমেনাঙ্গর | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
পজিম | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
শ ােণা | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
১৯২০ | 0.007455 | 0 | 0 | 0 | 0 | 0 | 0
বাাংোলদে | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
বঙ্গবনু্ধ | 0 | 0.006461 | 0 | 0.003727 | 0.007455 | 0.013844 | 0.00881
শেবররু়োঙ্গর | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
মার্চ | 0.053767 | 0 | 0 | 0 | 0.030611 | 0 | 0
হ়ে | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
শেখ | 0.030611 | 0.006461 | 0 | 0.003727 | 0.007455 | 0.013844 | 0.00881
সলেেন | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
পাঙ্গকস্থান | 0 | 0 | 0.174743 | 0 | 0 | 0 | 0
শুরু | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
দো | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0.063543
১৭ | 0.007455 | 0 | 0 | 0 | 0 | 0 | 0
বাড় | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
দাবী | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0.063543
পূণ চ | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
োলহার | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
িন্ম | 0.053767 | 0 | 0 | 0 | 0 | 0.099853 | 0
পড়া | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
ি়ে | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0.063543
মুজিবরু | 0.007455 | 0.006461 | 0 | 0.003727 | 0.007455 | 0.013844 | 0.00881
ঙ্গবভক্ত | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
অনযা়ে | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
পাঙ্গকস্তান | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
িাতী | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
সম়ে | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
গ্রাম | 0.007455 | 0 | 0 | 0 | 0 | 0 | 0
শদও়ো | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
দে | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
অঙ্গবর্ার | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
গ্রহণ | 0.053767 | 0 | 0 | 0 | 0 | 0.099853 | 0
সু্কে | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
ঙ্গবলরাধ | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
১৯৬৬ | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
িীবন | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
ভারত | 0 | 0 | 0.058248 | 0 | 0 | 0 | 0
শপে | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0.063543
হ়ে | 0 | 0.046598 | 0 | 0 | 0 | 0 | 0
কায চ | 0 | 0 | 0 | 0.026883 | 0 | 0 | 0
১৯৭১ | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
২৬ | 0 | 0 | 0 | 0 | 0.053767 | 0 | 0
রহমান | 0.053767 | 0.006461 | 0 | 0.003727 | 0.007455 | 0.013844 | 0.00881
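A minimal sketch of this TF-IDF computation in Python follows (length-normalized TF multiplied by IDF over tokenized sentences). The exact normalization and IDF variant the authors used are not fully specified, so this is one standard choice, and the two toy documents are invented for illustration.

    import math

    def tf_idf(sentences):
        """Length-normalized TF times IDF for each tokenized sentence."""
        n = len(sentences)
        # Document frequency: how many sentences contain each term.
        df = {}
        for sent in sentences:
            for term in set(sent):
                df[term] = df.get(term, 0) + 1
        vectors = []
        for sent in sentences:
            vec = {}
            for term in set(sent):
                tf = sent.count(term) / len(sent)   # count / length
                idf = math.log(n / df[term])
                vec[term] = tf * idf
            vectors.append(vec)
        return vectors

    docs = [["বঙ্গবনু্ধ", "িন্ম", "গ্রহণ"], ["বঙ্গবনু্ধ", "স্বাধীন", "শ ােণা"]]
    print(tf_idf(docs))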
4.2. Cosine Similarity

Cosine similarity is applied to obtain the relationship between the questions and the sentences. Cosine similarity is a measure between two vectors that counts the cosine of the angle between them. The angle is a judgment of orientation rather than magnitude, identified by comparing the vectors on a normalized space. For two non-zero vectors A and B it is

cos(θ) = (A · B) / (||A|| ||B||)

With the help of the TF-IDF table, we calculated the cosine similarity shown in Table 2.

Table 2: Cosine similarity.

Relations  | Sentence-1 | Sentence-2 | Sentence-3 | Sentence-4 | Sentence-5
Question-1 | 0.597      | 0.016      | 0.0        | 0.060      | 0.018
Question-2 | 0.074      | 0.012      | 0.0        | 0.483      | 0.013

The cosine similarity (Q1, S1) is greater than the cosine similarity (Q1, S2) (0.597 > 0.016), which states that the answer to Question 1 is 59.7% related to Sentence 1 and 1.6% related to Sentence 2. Ordering the values from the table gives (Q1, S1) > (Q1, S4) > (Q1, S5) > (Q1, S2) > (Q1, S3). Similarly, the cosine similarity (Q2, S1) is less than the cosine similarity (Q2, S4) (0.074 < 0.483), so the answer to Question 2 is 7.4% related to Sentence 1 and 48.3% related to Sentence 4, and the ordering is (Q2, S4) > (Q2, S1) > (Q2, S5) > (Q2, S2) > (Q2, S3). Therefore the answer to Question 1 lies in Sentence 1, and the answer to Question 2 lies in Sentence 4. In this way, we can find the relevant answer in the corpus.
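Combining the two steps, a minimal retrieval sketch scores each sentence vector against the question vector with the cosine formula above. The helper names and the tiny vectors are hypothetical, and best_sentence builds on the tf_idf sketch shown earlier.

    import math

    def cosine(u, v):
        """Cosine similarity of two sparse term-weight dictionaries."""
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def best_sentence(question_vec, sentence_vecs):
        """Return the index of the sentence most similar to the question."""
        scores = [cosine(question_vec, s) for s in sentence_vecs]
        return max(range(len(scores)), key=scores.__getitem__)

    q = {"িন্ম": 0.0999, "গ্রহণ": 0.0999}
    sents = [{"িন্ম": 0.0538, "গ্রহণ": 0.0538}, {"স্বাধীন": 0.0538}]
    print(best_sentence(q, sents))  # 0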
5. Experiments

5.1. Experimental Tools
The whole work was performed with the Anaconda distribution and Python 3.6. For cleaning the data and removing stop words, we constructed a tool for Bengali language processing named the Bengali Language Toolkit (BLTK), and we applied the NLTK system in many pre-processing tasks.
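BLTK itself is not publicly documented, so the sketch below (ours; the stop-word list is a tiny hypothetical sample and the whitespace tokenizer is an assumption) only illustrates the kind of cleaning and stop-word removal described:

```python
# Stand-in for the BLTK pre-processing step: this stop-word list is a small
# illustrative sample, not the toolkit's actual resource.
BENGALI_STOPWORDS = {"এই", "ও", "এবং", "করে", "থেকে"}

def preprocess(sentence):
    # Whitespace tokenization followed by stop-word removal.
    return [tok for tok in sentence.split() if tok not in BENGALI_STOPWORDS]

print(preprocess("বঙ্গবন্ধু এই দেশের স্বাধীনতা ঘোষণা করে"))
```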
5.2. Final Result
To test our proposed Bengali Information Retrieval System, we collected 672 articles from the popular Bengali newspaper 'The Daily Prothom Alo' as our input documents. We created 19334 questions as test data and obtained 97.22% accuracy for our BIRS: of the 19334 questions, 18797 were answered correctly and 537 incorrectly. The system retrieves information reliably and with low time complexity.

6. CONCLUSION AND FUTURE WORK
In this paper, we have presented a Bengali Information Retrieval System. To establish the proposed system we used various types of algorithms and methods, such as lemmatization, an anaphora resolution procedure, TF-IDF and cosine similarity. All processing was carried out on Bengali text as part of BNLP. We tested the proposed BIRS, recorded its accuracy, and compared the correct and incorrect results. In future, we plan to extend the system for educational, industrial, business and personal use, and to apply deep learning algorithms such as neural networks to its further development.
Proc. of the 13th Intl. Conference on Natural Language Processing, pages 99–108, Varanasi, India, December 2016. ©2016 NLP Association of India (NLPAI)

Cross-lingual transfer parsing from Hindi to Bengali using delexicalization and chunking

Ayan Das, Agnivo Saha, Sudeshna Sarkar
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur, WB, India
ayan.das@cse.iitkgp.ernet.in, agnivo.saha@gmail.com, sudeshna@cse.iitkgp.ernet.in

Abstract
While statistical methods have been very effective in developing NLP tools, the use of linguistic tools and understanding of language structure can make these tools better. Cross-lingual parser construction has been used to develop parsers for languages with no annotated treebank. Delexicalized parsers that use only POS tags can be transferred to a new target language, but the success of a delexicalized transfer parser depends on the syntactic closeness between the source and target languages. The understanding of the linguistic similarities and differences between the languages can be used to improve the parser. In this paper, we use a method based on cross-lingual model transfer to transfer a Hindi parser to Bengali. The technique does not need any parallel corpora but makes use of chunkers of these languages. We observe that while the two languages share broad similarities, Bengali and Hindi phrases do not have identical construction. We can improve the transfer-based parser if the parser is transferred at the chunk level. Based on this we present a method to use chunkers to develop a cross-lingual parser for Bengali which results in an improvement of unlabelled attachment score (UAS) from 65.1 (baseline parser) to 78.2.

1 Introduction
Parsers have a very important role in various natural language processing tasks. Machine learning based methods are most commonly used for learning parsers for a language, given annotated parse trees, which are called treebanks. But treebanks are not available for all languages, or only small treebanks may be available. In recent years, considerable effort has been put into developing dependency parsers for low-resource languages. In the absence of a treebank for a language, there has been research in using cross-lingual parsing methods (McDonald et al., 2011) where a treebank from a related source language (SL) is used to develop a parser for a target language (TL). In such work, an annotated treebank in SL and other resources are used to develop a parser model for TL. Most of the existing work assumes that although annotated treebanks are not available for the target language TL, other resources are available, such as a parallel corpus between the source and the target languages (Xiao and Guo, 2015; Rasooli and Collins, 2015; Tiedemann, 2015). However, developing a parallel corpus is also expensive if such a corpus is not available.
In this work, our goal is to look at methods for developing a cross-lingual transfer parser for a resource-poor Indian language for which we have access to a small or no treebank. We assume the availability of a monolingual corpus in the target language and a small bilingual (source-target) dictionary.
Given our familiarity with Bengali and Hindi, and the availability of a small treebank, we aim to test our approach in Hindi-Bengali transfer parsing. We choose Hindi as the source language as it is syntactically related to Bengali and a Hindi treebank (Nivre et al., 2016) is freely available which can be used to train a reasonably
accurate parser (Saha and Sarkar, 2016). We wish to use this Hindi treebank to develop a Bengali parser. Although our current work aims to develop a parser in Bengali from Hindi, this may be taken up as a general method for other resource-poor languages. We also have access to a monolingual corpus in Bengali and a small bilingual (Hindi-Bengali) dictionary.
Since the vocabularies of the two languages are different, some of the work in the literature attempted to address this problem by delexicalizing the dependency parsers, replacing the language-specific word-level features by more general part-of-speech (POS) level features. Such methods have yielded moderate-quality parsers in the target language (McDonald et al., 2011). However, the number of POS features is small and may not contain enough information. In order to alleviate this problem, some work has proposed to incorporate word-level features in the form of bilingual word clusters (Täckström et al., 2012) and other bilingual word features (Durrett et al., 2012; Xiao and Guo, 2014).
Both Hindi and Bengali use the SOV (Subject-Object-Verb) sentence structure. However, there exist differences in the morphological structure of words and phrases between these two languages (Chatterji et al., 2014). Since the overall syntactic structures of the languages are similar, we hypothesize that chunk-level transfer of a Hindi parser to Bengali may be more helpful than word-level transfer.
The rest of the paper is organized as follows. Section 2 discusses some of the existing related work. In Section 3 we state the objective of this work. In Section 4 we present in detail the dataset used, and in Section 5 we state in detail our approach to cross-lingual parsing. In Section 6 we analyze the errors. Section 7 concludes the paper.

2 Related work
A variety of methods for developing transfer parsers for resource-poor languages without any treebank have been proposed in the literature. In this section, we provide a brief survey of some of the methods relevant to our work.

2.1 Delexicalized parsing
Delexicalized parsing, proposed by Zeman and Resnik (2008), involves training a parser model on a treebank of a resource-rich language in a supervised manner without using any lexical features and applying the model directly to parse sentences in the target language. They built a Swedish dependency parser using Danish, a syntactically similar language. Søgaard (2011) used a similar method for several different language pairs. Their system performance varied widely (F1-score: 50%-75%) depending upon the similarity of the language pairs.
Täckström et al. (2012) used cross-lingual word clusters obtained from large unlabelled corpora as additional features in their delexicalized parser. Naseem et al. (2012) proposed a method for multilingual learning for languages that exhibit significant differences from existing resource-rich languages, which selectively learns the features relevant for a target language and ties the model parameters accordingly. Täckström et al. (2013) improved the performance of the delexicalized parser by incorporating selective sharing of model parameters, based on typological information, into a discriminative graph-based parser model.
Distributed representations of words (Mikolov et al., 2013b) as vectors can be used to capture cross-lingual lexical information and can be combined with delexicalized parsers. Xiao and Guo (2014) learnt language-independent word representations to address cross-lingual dependency parsing.
They combined all sentences from both languages to induce real-valued distributed representations of words under a deep neural network architecture, and then
used the induced interlingual word representations as augmenting features to train a delexicalized dependency parser. Duong et al. (2015a) followed a similar approach where the vectors for both languages are learnt using a skipgram-like method in which the system was trained to predict the POS tags of the context words instead of the words themselves.

2.2 Cross-lingual projection
Cross-lingual projection based approaches use parallel data or some other lexical resource, such as a dictionary, to project source-language dependency relations onto the target language (Hwa et al., 2005). Ganchev et al. (2009) used generative and discriminative models for dependency grammar induction that use word-level alignments and a source-language parser.
McDonald et al. (2011) learnt a delexicalized parser in English and then used the English parser to seed a constraint-learning algorithm to learn a parser in the target language. Ma and Xia (2014) used word alignments obtained from parallel data to transfer source-language constraints to the target side.
Rasooli and Collins (2015) proposed a method to induce a dependency parser in the target language using a dependency parser in the source language and a parallel corpus. Guo et al. (2015) proposed a CCA-based projection method and a projection method based on word alignments obtained from a parallel corpus.

2.3 Parsing in Hindi and Bengali
Hindi and Bengali are morphologically rich and relatively free word order languages. Some of the notable works on Indian languages are by Bharati and Sangal (1993) and Bharati et al. (2002). Also, the works of Nivre (2005) and Nivre (2009) have been successfully applied to parsing Indian languages such as Hindi and Bengali. Several works on Hindi parsing (Ambati et al., 2010; Kosaraju et al., 2010) used data-driven parsers such as the Malt parser (Nivre, 2005) and the MST parser (McDonald et al., 2005). Bharati et al. (2009b) used a demand-frame based approach for Hindi parsing. Chatterji et al. (2009) have shown that proper feature selection (Begum et al., 2011) can immensely improve the performance of data-driven and frame-based parsers.
Chunking (shallow parsing) has been used successfully to develop good-quality parsers for Hindi (Bharati et al., 2009b; Chatterji et al., 2012). Bharati et al. (2009b) proposed a two-stage constraint-based approach where they first tried to extract the intra-chunk dependencies and then resolved the inter-chunk dependencies in the second stage. Ambati et al. (2010) used disjoint-set dependency relations and performed the intra-chunk parsing and inter-chunk parsing separately. Chatterji et al. (2012) proposed a three-stage approach where rule-based inter-chunk parsing followed data-driven inter-chunk parsing.
A project for building multi-representational and multi-layered treebanks for Hindi and Urdu (Bhatt et al., 2009)¹ was carried out as a joint effort by IIIT Hyderabad, University of Colorado and University of Washington. Besides the syntactic version of the treebank being developed by IIIT Hyderabad (Ambati et al., 2011), University of Colorado has built the Hindi-Urdu proposition bank (Vaidya et al., 2014), and a phrase-structure form of the treebank (Bhatt and Xia, 2012) is being developed at University of Washington.
A part of the Hindi dependency treebank² has been released in which the inter-chunk dependency relations (dependency links between chunk heads) have been manually tagged and the chunks were expanded automatically using an arc-eager algorithm.

¹ http://verbs.colorado.edu/hindiurdu/index.html
² http://ltrc.iiit.ac.in/treebank_H2014/

Some of the major works on parsing Bengali appeared in ICON 2009 (http://www.icon2009.in/). Ghosh et al. (2009) used a CRF-based hybrid method, and Chatterji
et al. (2009) used variations of transition-based dependency parsing. Mannem (2009) came up with a bi-directional incremental parsing and perceptron learning approach, and De et al. (2009) used a constraint-based method. Das et al. (2012) compares the performance of a grammar-driven parser and a modified MALT parser.

3 Objective
We want to build a good dependency parser using a cross-lingual transfer method for some Indian languages for which no treebanks are available. We try to make use of the Hindi treebank to build the dependency parser, and we explore the use of the other resources that we have.
Due to our familiarity with the Bengali language and the availability of a small treebank in Bengali, we aim to perform our initial experiments in Bengali to test our proposed method. We have a small Hindi-Bengali bilingual dictionary and POS taggers, morphological analyzers and chunkers for both these languages.
In such a scenario, delexicalization methods can be used for cross-lingual parser construction. We wish to get some understanding of what additional resources can be used for general cross-lingual transfer parsing in this framework, depending on the similarities and differences between the language pairs.

4 Resources used
For our experiments, we used the Hindi Universal Dependency treebank to train the Hindi parser (Saha and Sarkar, 2016; Chen and Manning, 2014). The Hindi universal treebank consists of 16648 parse trees annotated using the Universal Dependency (UD) tagset, divided into training, development and test sets. For testing in Bengali we used a test set of 150 parse trees annotated using the Anncorra (Sharma et al., 2007) tagset. This small Bengali treebank was used in the ICON 2010³ contest to train parsers for various Indian languages. The parse trees in the test data were partially tagged with only inter-chunk dependencies and chunk information. We completed the trees by manually annotating the intra-chunk dependencies using the intra-chunk tags proposed by Kosaraju et al. (2012). We used the complete trees for our experiments.
Table 1 gives the details of the datasets used.

Table 1: Number of trees in the UD Hindi treebank and the ICON Bengali treebank.
Data          UD Hindi treebank (trees)   ICON Bengali treebank (trees)
Training      13304                       979
Development   1659                        150
Test          1685                        150

The initial Hindi and Bengali word embeddings were obtained by running word2vec (Mikolov et al., 2013b) on a Hindi Wikipedia dump corpus and the FIRE 2011⁴ corpus respectively.
For Hindi-Bengali word pairs we used a small bilingual dictionary developed at our institute as part of the ILMT project⁵. It consists of about 12500 entries. For chunking we used the chunkers and the chunk-head computation tool developed at our institute. The sentences in the Hindi treebank were chunked using an automatic chunker to obtain the chunk-level features. In case of disagreement between the output of the automatic chunker and the gold-standard parse trees, we adhered to the chunk structure of the gold-standard parse tree.
Before parsing the Hindi trees we relabeled the Hindi treebank sentences with Anncorra (Sharma et al., 2007) POS and morphological tags using the POS tagger (Dandapat et al., 2004) and morphological analyzer (Bhattacharya et al., 2005), as the automatic chunker requires the POS and morphological information in Anncorra format. Moreover, due to relabeling, both the training and the test data have the POS and morphological features in Anncorra format.

³ http://www.icon2010.in/
⁴ http://www.isical.ac.in/~clia/2011/
⁵ http://ilmt.iiit.ac.in/ilmt/index.php
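As an illustration of the embedding step described in Section 4, a minimal sketch using gensim's word2vec (ours, with gensim 4's API; the toy corpus stands in for the Hindi Wikipedia and FIRE 2011 dumps, and 50 dimensions matches the setting in Section 5.2):

```python
from gensim.models import Word2Vec

def train_embeddings(tokenized_sentences, dim=50, min_count=1):
    # Skip-gram word2vec with 50-dimensional vectors, as in the paper.
    # A real run on a large corpus would use a higher min_count cutoff.
    model = Word2Vec(sentences=tokenized_sentences, vector_size=dim,
                     window=5, min_count=min_count, sg=1, workers=4)
    return model.wv  # KeyedVectors: word -> dim-dimensional numpy vector

# Toy corpus standing in for the actual monolingual dumps.
hindi_wv = train_embeddings([["राम", "फल", "खाता", "है"]])
print(hindi_wv["फल"].shape)
```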
5 Our proposed Hindi to Bengali cross-lingual dependency parser

5.1 Baseline delexicalization based method
For the delexicalized baseline we trained the Hindi parser using only POS features. We used this model directly to parse the Bengali test sentences. It gives a UAS (Unlabelled Attachment Score) of 65.1% (Table 2).
We report only the UAS because the Bengali arc labels use the Anncorra tagset, which is different from the Universal Dependency tagset. The dependency labels in the UD and ICON treebanks are different, with ICON providing more fine-grained and Indian-language-specific tags. However, it was observed that the unlabelled dependencies were sufficiently similar.
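The delexicalization idea can be sketched as follows (our illustration on a simplified CoNLL-like token layout, which is an assumption, not the treebanks' actual format): lexical fields are blanked so the parser is trained on POS and structure alone.

```python
def delexicalize(rows):
    # rows: (id, form, lemma, upos, head, deprel) tuples -- a simplified
    # CoNLL-like layout assumed for illustration only.
    # Replace the lexical fields with '_' so only POS features remain.
    return [(i, "_", "_", upos, head, rel)
            for (i, form, lemma, upos, head, rel) in rows]

hindi_tree = [(1, "राम", "राम", "PROPN", 2, "nsubj"),
              (2, "खाता", "खा", "VERB", 0, "root")]
print(delexicalize(hindi_tree))
```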
5.2 Transferred parser enhanced with lexical features
When a parser trained using the lexical features of one language is used to parse sentences in another language, the performance depends on the lexical similarity between the two languages.
We wish to investigate whether it is possible to use the syntactic similarities of the words to transfer some information to the Bengali parser along with the non-lexical information. We have used word embeddings (Mikolov et al., 2013b) as the lexical features, in the hope that the word vectors capture sufficient lexical information.
Our work is different from that of Xiao and Guo (2014) and Duong et al. (2015b), where the word vectors for both languages are jointly trained. We observed that the work of Xiao and Guo (2014) is dependent on the quality and size of the dictionary, and the training may not be uniform due to differences in the frequency of the words occurring in the corpus on which the vectors are trained. It also misses words that have multiple meanings in the other language.
Our method has the following steps:
Step 1 - Learning monolingual word embeddings: The monolingual word embeddings for Hindi and Bengali are learnt by training word2vec (Mikolov et al., 2013b) on monolingual Hindi and Bengali corpora respectively. The dimension of the learnt word embeddings is set to 50.
Step 2 - Training the Hindi monolingual dependency parser: To train the Hindi parser model using the Hindi treebank data we used the parser proposed by Chen and Manning (2014). The word embeddings were initialized with the ones learnt from the monolingual corpus. Apart from the word embeddings, the other features are randomly initialized.
Step 3 - Learning interlingual word representations using linear regression based projection: For learning interlingual word representations we used all the cross-lingual word pairs from a Hindi-Bengali dictionary and dropped the Hindi words whose corresponding entry in Bengali consists of multiple words. We used only those word pairs for which both words are in the vocabulary of the corresponding monolingual corpora on which the word embeddings were trained. The linear regression method (Mikolov et al., 2013a) was used to project the Bengali word embeddings into the vector space of the Hindi embeddings obtained after training the parser on the Hindi treebank data. The regressor was trained using the embeddings of the 3758 word pairs obtained from the dictionary.
Subsequently, we attempted to compare the method proposed by Xiao and Guo (2014). In both cases the parser performances were very similar, and hence we report only the results obtained using linear regression.
Step 4 - Transfer of parser model from Hindi to Bengali: In the delexicalized version, the parsers are used directly to test on Bengali data. In the lexicalized versions, we obtained the Bengali parser models by replacing the Hindi word embeddings with the projected Bengali word vectors obtained in Step 3. The transformation is shown in Figure 1.
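Step 3's translation-matrix idea can be sketched with ordinary least squares (our illustration; the toy arrays stand in for the 3758 real dictionary word pairs):

```python
import numpy as np

def learn_projection(bengali_vecs, hindi_vecs):
    # Solve min_W ||B W - H||^2, a Mikolov et al. (2013a) style linear map.
    # B: n x d Bengali vectors, H: n x d Hindi vectors for dictionary pairs.
    W, *_ = np.linalg.lstsq(bengali_vecs, hindi_vecs, rcond=None)
    return W

def project(bengali_vec, W):
    # Map a Bengali embedding into the Hindi embedding space.
    return bengali_vec @ W

# Toy 3-pair, 4-dimensional example standing in for the real dictionary.
B = np.random.rand(3, 4)
H = np.random.rand(3, 4)
W = learn_projection(B, H)
print(project(B[0], W).shape)  # (4,)
```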
Table 2: Comparison (UAS) of 1) the delexicalized parser model and 2) the parser using projected Bengali vectors.
Parser                       Delexicalized (Baseline)   Projected Bengali vectors
(Chen and Manning, 2014)     65.1                       67.2

Table 2 compares the UAS of word-level transfer for 1) the delexicalized parser model (Delexicalized) and 2) the lexicalized Bengali parser model in which the Hindi word embeddings are replaced by Bengali word vectors projected onto the vector space of the Hindi word embeddings (Projected Bengali vectors). We observe that projected lexical features improve UAS over the delexicalized baseline from 65.1 to 67.2.

5.3 Chunk-level transfer for cross-lingual parsing
There exist differences in the morphological structure of words and phrases between Hindi and Bengali. For example, the English phrase "took bath" is written in Hindi as "nahayA" using a single word, while the same phrase in Bengali is written as "snan korlo" ("bath did") using two words. Similarly, the English phrase "is going" is written in Hindi as "ja raha hai" ("go doing is") using three words, while the same phrase in Bengali is written as "jachhe" using a single word.
This makes us believe that chunking can help improve cross-lingual parsing between Hindi and Bengali by using the similarities in the arrangement of phrases in a sentence. Chunking (shallow parsing) reduces the complexity of full parsing by identifying non-recursive cores of different types of phrases in the text (Peh and Ann, 1996). Chunking is easier than parsing, and both rule-based and statistical chunkers can be developed quite easily.
In Figure 2 we present a Bengali sentence and the corresponding Hindi sentence, transliterated to Roman, with English glosses. Parentheses indicate the chunks of the sentences, and lines (in the figure) indicate the correspondence between the chunks. We see that the correspondence is at the chunk level and not at the word level. The sentences are quite similar as far as the inter-chunk orientation is concerned, as is evident from Figures 3 and 4.
We have used Hindi and Bengali chunkers which identify the chunks and assign each chunk its chunk type, chunk-level morphological features and head word. For chunk-level transfer we performed the following steps (a small sketch of Step 2 follows after Figure 1 below):
Step 1: We chunked the Hindi treebank sentences and extracted the chunk heads.
Step 2: We converted the full trees to chunk-head trees by removing the non-head words and their links, such that only the chunk head words and their links with the other head words are left.
Step 3: We trained the Hindi dependency parsers on the Hindi chunk-head trees by the delexicalization method and the method described in Section 5.2.

[Figure 1: The neural network shares parameters such as weights and the POS and arc-label embeddings between the source- and target-language parser models; only the source-language word embeddings are replaced by projected target-language word vectors. E_word^source, E_word^target, E_POS and E_arc are the embedding matrices from which the mapping layer gets the vectors by indexing.]
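A minimal sketch of the Step 2 conversion (ours; the token format with chunk-head flags is an assumption made for illustration):

```python
def to_chunk_head_tree(tokens):
    # tokens: dicts with "id", "head" (parent id, 0 = root) and
    # "is_chunk_head" -- a simplified representation assumed here.
    id2tok = {t["id"]: t for t in tokens}

    def nearest_head(tok_id):
        # Climb parent links until a chunk-head word (or the root) is reached.
        while tok_id != 0 and not id2tok[tok_id]["is_chunk_head"]:
            tok_id = id2tok[tok_id]["head"]
        return tok_id

    # Keep only chunk heads; reattach each to its nearest chunk-head ancestor.
    return [{"id": t["id"], "head": nearest_head(t["head"])}
            for t in tokens if t["is_chunk_head"]]

toy = [{"id": 1, "head": 3, "is_chunk_head": True},   # noun chunk head
       {"id": 2, "head": 0, "is_chunk_head": True},   # root verb, chunk head
       {"id": 3, "head": 2, "is_chunk_head": False}]  # non-head word
print(to_chunk_head_tree(toy))  # [{'id': 1, 'head': 2}, {'id': 2, 'head': 0}]
```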
B. Bengali sentence:                 (Patnay) (bhumikamper phale) (99 jon lok) (mara jay)
BE. Bengali sentence, English gloss: (Patna-at) (earthquake-of result-in) (99 number person) (death happened)
H. Hindi sentence:                   (Patna mein) (bhukamp ke dwara) (99 admi) (mare)
HE. Hindi sentence, English gloss:   (Patna at) (earthquake of result) (99 persons) (died)

Figure 2: Chunk mapping between a Bengali
and Hindi sentence that convey the same meaning: "99 people died due to earthquake in Patna".

[Figure 3: Word-level parse trees of the example Bengali and Hindi sentences. (a) Bengali word-level parse tree; (b) Hindi word-level parse tree.]

[Figure 4: Chunk-level parse trees of the example Bengali and Hindi sentences. (a) Bengali chunk-level parse tree; (b) Hindi chunk-level parse tree.]

Step 4: This parser was transferred using the methods described in Section 5.2 to get the delexicalized parser for Bengali head trees.

[Figure 5: Chunk-level parse tree of the example Bengali sentence before and after expansion. (a) Bengali chunk-head parse tree; (b) Bengali chunk-head parse tree after expansion.]

Step 5: For testing, we parsed the Bengali test sentences consisting of only the chunk-head words. The UAS for head trees obtained by the delexicalized method is 68.6.
Step 6: For intra-chunk expansion we simply attached the non-head words to their corresponding chunk heads to get the full trees. (This introduces a lot of errors; in future we plan to use rules for chunk expansion to make the intra-chunk expansion more accurate.) The UAS for trees after intra-chunk expansion is 78.2.
We observed that our simple heuristic for intra-chunk expansion increases the accuracy of the parser. There are rule-based methods and statistical approaches for intra-chunk expansion (Kosaraju et al., 2012; Bharati et al., 2009a; Chatterji et al., 2012) in Hindi which may be adopted for Bengali.

Table 3: Comparison (UAS) of word-level and chunk-level transfer of parse trees.
                                                       Delexicalized   Projected Bengali vectors
Trees after word-level transfer                        65.1            67.2
Expanded chunk-head trees after chunk-level transfer   78.2            75.8

Table 3 compares the UAS of baseline parsers for word-level transfer with chunk-level transfer followed by expansion. We found a significant increase in UAS from 65.1 to 78.2 after parsing and subsequent intra-chunk expansion. However, while the common vector-based word representation had shown a slight improvement when applied to word-level transfer, it did not help when applied to chunk-level transfer. This may be because we used only the vector embeddings of chunk heads for the chunk-level parsing. We wish to work further on vector representations of chunks which might capture more chunk-level information and help improve the results.
While chunking has been used with other parsers, we did not find any work that uses chunking in a transfer parser. The source (Hindi) delexicalized word-level parser gave an accuracy of 77.7%, and the source (Hindi) delexicalized chunk-level parser followed by expansion gave an accuracy of 79.1%, on the UD Hindi test data.
There is no reported work on cross-lingual transfer between Bengali and Hindi, but as a reference we mention the UAS values reported for other delexicalization-based transfer parsers on other language pairs. Zeman and Resnik (2008)'s delexicalized parser gave an F-score of 66.4 on Danish. Täckström et al. (2012) achieved an average UAS of 63.0 by using word clusters on ten target languages with English as the source language; they achieved a UAS of 57.1 without using any word cluster feature.
In their work, Xiao and Guo (2014) tried out cross-lingual parsing on a set of eight target languages
with English as the source language and achieved a UAS of 58.9 on average, while their baseline delexicalized MSTParser using universal POS tag features gave a UAS of 55.14 on average. Duong et al. (2015b) also applied their method to nine target languages with English as the source language; they achieved a UAS of 58.8 on average.

6 Error analysis
We analyzed the errors in the dependency relations of the parse trees obtained by parsing the test sentences. We analyze the results based on the number of dependency relations in the gold data that actually appear in the trees parsed by our parser. We report results for the twelve most frequent dependency tags in Table 4.

Table 4: Comparison of errors for 12 dependency tags. Columns 3 to 6 give the number of dependencies bearing the corresponding tag in the gold data that actually appear in the parsed trees, with accuracy in %.
Dependency relation | Actual count | Delexicalized word-level transfer | Word-level transfer, projected Bengali vectors | Delexicalized chunk-level transfer | Chunk-level transfer, projected Bengali vectors
k1 (doer/agent/subject) | 166 | 111 (66.9) | 104 (62.7) | 133 (80.1) | 118 (71.1)
vmod (verb modifier) | 111 | 71 (64.0) | 78 (70.3) | 85 (76.6) | 71 (64.0)
main (root) | 150 | 96 (64.4) | 108 (72.5) | 105 (70.5) | 103 (69.1)
k2 (object) | 131 | 100 (76.3) | 92 (70.2) | 104 (79.4) | 88 (67.2)
r6 (possessive) | 82 | 21 (25.6) | 49 (59.8) | 13 (15.9) | 52 (63.4)
pof (part-of relation) | 59 | 55 (93.2) | 58 (98.3) | 56 (94.9) | 56 (94.9)
k7p (location in space) | 50 | 31 (62.0) | 30 (60.0) | 38 (76.0) | 33 (66.0)
ccof (co-ordinate conjunction of) | 47 | 1 (2.1) | 4 (8.5) | 1 (2.1) | 2 (4.3)
k7t (location in time) | 40 | 25 (62.5) | 20 (50.0) | 31 (77.5) | 15 (37.5)
k7 (location elsewhere) | 22 | 15 (68.2) | 14 (63.6) | 16 (72.7) | 17 (77.3)
k1s (noun complement) | 18 | 13 (72.2) | 14 (77.8) | 14 (77.8) | 14 (77.8)
relc (relative clause) | 12 | 1 (8.4) | 1 (8.4) | 0 (0.0) | 0 (0.0)

From Table 4 we find that chunk-level transfer increases the accuracy of tree-root identification. Chunk-level transfer also significantly increases the accuracy of identifying the relations with the k1, vmod, k2 and k7 tags.
Although the delexicalized chunk-level parser gives the overall best result, its accuracy is lowest for the relation of type r6 (possessive/genitive). We observed that in most of the erroneous cases, both words that are expected to be connected by the r6 dependency are actually predicted as modifiers of a common parent. We find that the accuracy on the r6 tag improves in the case of word-level transfer, and the best accuracy on r6 is achieved with the use of lexical features. Hence, the drop in performance may be due to a lack of sufficient information in the case of chunk-level transfer, or to the chunk-expansion heuristic used in this work.
However, for all the methods discussed above, the parser performs poorly in identifying the "conjunction of" (ccof) and relative clause (relc) relations. We observed that the poor result on the ccof tag is due to the difference between the annotation schemes of ICON and UD. In the ICON data, the conjunctions are the roots of the trees and the corresponding verbs or nouns are the modifiers, while in the UD scheme the conjunctions are the modifiers of the corresponding verbs or nouns. We need to investigate further into the poor identification of relc dependencies.
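The per-tag figures in Table 4 can be computed by checking, for each gold relation, whether the same head attachment appears in the parsed tree; a small sketch (ours, on a simplified relation format):

```python
from collections import defaultdict

def per_tag_accuracy(gold, predicted):
    # gold, predicted: lists of (token_id, head_id, deprel) for the same
    # sentences; a gold relation counts as found if the same head
    # attachment appears in the parser output.
    pred_set = {(tid, head) for tid, head, _ in predicted}
    totals, hits = defaultdict(int), defaultdict(int)
    for tid, head, rel in gold:
        totals[rel] += 1
        hits[rel] += (tid, head) in pred_set
    return {rel: (hits[rel], 100.0 * hits[rel] / totals[rel]) for rel in totals}

gold = [(1, 2, "k1"), (2, 0, "main")]
pred = [(1, 2, "k1"), (2, 1, "main")]
print(per_tag_accuracy(gold, pred))  # {'k1': (1, 100.0), 'main': (0, 0.0)}
```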
7 Conclusion
We show that knowledge of the shallow syntactic structures of the languages helps in improving the quality of cross-lingual parsers. We observe that chunking significantly improves cross-lingual parsing from Hindi to Bengali due to their syntactic similarity at the phrase level. The experimental results clearly show that chunk-level transfer of a parser model from Hindi to Bengali is better than direct word-level transfer. This also goes to establish that one can improve the performance of pure statistical systems by additionally using some linguistic knowledge and tools. The initial experiments were done in Bengali; in future we plan to broaden the results to include other Indian languages for which open-source chunkers can be found.

References
Bharat Ram Ambati, Samar Husain, Sambhav Jain, Dipti Misra Sharma, and Rajeev Sangal. 2010. Two methods to incorporate 'local morphosyntactic' features in Hindi dependency parsing. In Proceedings of the NAACL HLT 2010 First Workshop on SPMRL, pages 22–30, Los Angeles, CA, USA, June. Association for Computational Linguistics.
Bharat Ram Ambati, Rahul Agarwal, Mridul Gupta, Samar Husain, and Dipti Misra Sharma. 2011. Error detection for treebank validation. Asian Language Resources collocated with IJCNLP 2011, page 23.
Rafiya Begum, Karan Jindal, Ashish Jain, Samar Husain, and Dipti Misra Sharma. 2011. Identification of conjunct verbs in Hindi and its effect on parsing accuracy. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 29–40. Springer.
Akshar Bharati and Rajeev Sangal. 1993. Parsing free word order languages in the Paninian framework. In Proceedings of the 31st Annual Meeting on ACL, ACL '93, pages 105–111, Stroudsburg, PA, USA. Association for Computational Linguistics.
Akshar Bharati, Rajeev Sangal, and T Papi Reddy. 2002. A constraint based parser using integer programming. Proc. of ICON.
Akshar Bharati, Mridul Gupta, Vineet Yadav, Karthik Gali, and Dipti Misra Sharma. 2009a. Simple parser for Indian languages in a dependency framework. In Proceedings of the Third Linguistic Annotation Workshop, pages 162–165, Suntec, Singapore, August. Association for Computational Linguistics.
Akshar Bharati, Samar Husain, Meher Vijay, Kalyan Deepak, Dipti Misra Sharma, and Rajeev Sangal. 2009b. Constraint based hybrid approach to parsing Indian languages. In Proceedings of the 23rd PACLIC, pages 614–621, Hong Kong, December. City University of Hong Kong.
Rajesh Bhatt and Fei Xia. 2012. Challenges in converting between treebanks: a case study from the HUTB. In META-RESEARCH Workshop on Advanced Treebanking, page 53.
Rajesh Bhatt, Bhuvana Narasimhan, Martha Palmer, Owen Rambow, Dipti Misra Sharma, and Fei Xia. 2009. A multi-representational and multi-layered treebank for Hindi/Urdu. In Proceedings of the Third Linguistic Annotation Workshop, ACL-IJCNLP '09, pages 186–189, Stroudsburg, PA, USA. Association for Computational Linguistics.
Samit Bhattacharya, Monojit Choudhury, Sudeshna Sarkar, and Anupam Basu. 2005. Inflectional morphology synthesis for Bengali noun, pronoun and verb systems. In Proceedings of the National Conference on Computer Processing of Bangla (NCCPB), pages 34–43.
Sanjay Chatterji, Praveen Sonare, Sudeshna Sarkar, and Devshri Roy. 2009. Grammar driven rules for hybrid Bengali dependency parsing. Proceedings of ICON09 NLP Tools Contest: Indian Language Dependency Parsing, Hyderabad, India.
Sanjay Chatterji, Arnab Dhar, Sudeshna Sarkar, and Anupam Basu. 2012. A three stage hybrid parser for Hindi. In Proceedings of the Workshop on MTPIL, pages 155–162, Mumbai, India, December. The COLING 2012 Organizing Committee.
Sanjay Chatterji, Tanaya Mukherjee Sarkar, Pragati Dhang, Samhita Deb, Sudeshna Sarkar, Jayshree Chakraborty, and Anupam Basu. 2014. A dependency annotation scheme for Bangla treebank. Language Resources and Evaluation, 48(3):443–477.
Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 EMNLP, pages 740–750, Doha, Qatar, October. Association for Computational Linguistics.
S. Dandapat, S. Sarkar, and A. Basu. 2004. A hybrid model for part-of-speech tagging and its application to Bengali.
Arjun Das, Arabinda Shee, and Utpal Garain. 2012. Evaluation of two Bengali dependency parsers. In Proceedings of the Workshop on MTPIL, pages 133–142, Mumbai, India, December. The COLING 2012 Organizing Committee.
Sankar De, Arnab Dhar, and Utpal Garain. 2009. Structure simplification and demand satisfaction approach to dependency parsing for Bangla. In Proc. of 6th ICON Tool Contest: Indian Language Dependency Parsing, pages 25–31.
Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015a. Cross-lingual transfer for unsupervised dependency parsing without parallel data. In Proceedings of the Nineteenth Conference on CoNLL, pages 113–122, Beijing, China, July. Association for Computational Linguistics.
Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015b. Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser. Volume 2: Short Papers, page 845.
Greg Durrett, Adam Pauls, and Dan Klein. 2012. Syntactic transfer using a bilingual lexicon. In Proceedings of the 2012 Joint Conference on EMNLP and CoNLL, pages 1–11, Jeju Island, Korea, July. Association for Computational Linguistics.
Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. ACL '09, pages 369–377, Stroudsburg, PA, USA. Association for Computational Linguistics.
Aniruddha Ghosh, Pinaki Bhaskar, Amitava Das, and Sivaji Bandyopadhyay. 2009. Dependency parser for Bengali: the JU system at ICON 2009. In Proc. of 6th ICON Tool Contest: Indian Language Dependency Parsing, pages 7–11.
Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the 53rd ACL and the 7th IJCNLP, volume 1, pages 1234–1244.
Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11:11–311.
Prudhvi Kosaraju, Sruthilaya Reddy Kesidi, Vinay Bhargav Reddy Ainavolu, and Puneeth Kukkadapu. 2010. Experiments on Indian language dependency parsing. Proceedings of the ICON10 NLP Tools Contest: Indian Language Dependency Parsing.
Prudhvi Kosaraju, Bharat Ram Ambati, Samar Husain, Dipti Misra Sharma, and Rajeev Sangal. 2012. Intra-chunk dependency annotation: Expanding Hindi inter-chunk annotated treebank. In Proceedings of the Sixth Linguistic Annotation Workshop, pages 49–56, Jeju, Republic of Korea, July. Association for Computational Linguistics.
Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of the 52nd Annual Meeting of the ACL (Volume 1: Long Papers), pages 1337–1348, Baltimore, Maryland, June. Association for Computational Linguistics.
Prashanth Mannem. 2009. Bidirectional dependency parser for Hindi, Telugu and Bangla. Proceedings of ICON09 NLP Tools Contest: Indian Language Dependency Parsing, Hyderabad, India.
Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT Conference and Conference on EMNLP, pages 523–530.
Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on EMNLP, EMNLP '11, pages 62–72, Stroudsburg, PA, USA. Association for Computational Linguistics.
Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS 26, pages 3111–3119. Curran Associates, Inc.
Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the ACL: Long Papers - Volume 1, ACL '12, pages 629–637, Stroudsburg, PA, USA. Association for Computational Linguistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), May.
Joakim Nivre. 2005. Dependency grammar and dependency parsing. Technical report, Växjö University.
Joakim Nivre. 2009. Parsing Indian languages with MaltParser. In Proceedings of the ICON09 NLP Tools Contest: Indian Language Dependency Parsing, pages 12–18.
Li-Shiuan Peh and Christopher Ting Hian Ann. 1996. A divide-and-conquer strategy for parsing. CoRR, cmp-lg/9607020.
Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on EMNLP, pages 328–338, Lisbon, Portugal, September. Association for Computational Linguistics.
Agnivo Saha and Sudeshna Sarkar. 2016. Enhancing neural network based dependency parsing using morphological information for Hindi. In 17th CICLing, Konya, Turkey, April. Springer.
D.M. Sharma, R. Sangal, L. Bai, R. Begam, and K. Ramakrishnamacharyulu. 2007. AnnCorra: Treebanks for Indian languages, annotation guidelines (manuscript).
Anders Søgaard. 2011. Data point selection for cross-language adaptation of dependency parsers.
Oscar Täckström, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the NAACL: HLT, NAACL HLT '12, pages 477–487, Stroudsburg, PA, USA. Association for Computational Linguistics.
Oscar Täckström, Ryan T. McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. Pages 1061–1071.
Jörg Tiedemann. 2015. Improving the cross-lingual projection of syntactic dependencies. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 191–199, Vilnius, Lithuania, May. Linköping University Electronic Press, Sweden.
Ashwini Vaidya, Owen Rambow, and Martha Palmer. 2014. Light verb constructions with 'do' and 'be' in Hindi: A TAG analysis. In Workshop on Lexical and Grammatical Resources for Language Processing, page 127.
Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Conference on Natural Language Learning (CoNLL).
Min Xiao and Yuhong Guo. 2015. Annotation projection-based representation learning for cross-lingual dependency parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 73–82, Beijing, China, July. Association for Computational Linguistics.
D. Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. NLP for Less Privileged Languages, pages 35–35.
2017 20th International Conference of Computer and Information Technology (ICCIT), 22–24 December, 2017. 978-1-5386-1150-0/17/$31.00 ©2017 IEEE

Evaluation of Machine Translation Approaches to Translate English to Bengali

Shamsun Nahar, Dept. of CSE, World University of Bangladesh, Dhaka, Bangladesh, shamsun_nahar@ymail.com
Mohammad Nurul Huda, Dept. of CSE, United International University, Dhaka, Bangladesh, mnh@cse.uiu.ac.bd
Md. Nur-E-Arefin, Mohammad Mahbubur Rahman, Dept. of IT, IIT, University of Dhaka, Dhaka, Bangladesh, sami.arefin@gmail.com

Abstract—This paper describes the different types of machine translation (MT) approaches, where MT refers to the use of computers to translate automatically from one language to another. It is highly challenging to build a proper MT system that works with full accuracy when translating foreign languages into native languages, but this paper aims at providing a solution that could be helpful for building an MT system that converts English sentences into Bengali. A total of 12 tenses (present indefinite, continuous, perfect and perfect continuous; past indefinite, continuous, perfect and perfect continuous; future indefinite, continuous, perfect and perfect continuous) are used for translating English sentences into Bengali, with meanings looked up in our own database. After comparing the experimental results of different machine translation approaches with Google Translator, it is found that one of our investigated and implemented methods, the corpus approach, provides higher accuracy than Google Translator and the other implemented methods.

Keywords—Machine Translation; Machine Learning; Natural Language Processing; Language Translation

I.
INTRODUCTION
Bengali, also known as Bangla, is the mother tongue of Bangladesh. More than 220 million people speak Bengali, and it is ranked the 7th most spoken language in the world. Bengali is also used in the eastern part of India (West Bengal and Kolkata) as the medium of speaking and writing. Numerous studies have been carried out in the area of language translation, but Natural Language Processing (NLP) remains a difficult task because no fully successful language translation machine exists. Natural languages are highly complex: words may have different meanings along with various uses and translations, sentences may have distinct readings, and relationships among linguistic entities may be ambiguous. Since this is a Human Language Technology (HLT), there are enormous prospects for research in this field. In fact, it is impossible to study the whole language translation process at a time; as a result, it needs to be segmented into many parts. Moreover, there is
also another dilemma: most studies select only a part of the source language for translation to the target language. In this paper, English is used as the source language and Bengali as the target language, since there are different types of sentences in both languages, but the main focus of this paper is to evaluate some machine translation approaches by implementing them. So far, very few studies have been done on English to Bengali language translation, both in Bangladesh and in West Bengal, India. Only the present indefinite and present continuous forms of English sentences are considered in [1], which presents a simple algorithm for language translation. Only one paper considered all forms of tenses [2]. Using Artificial Intelligence (AI), a Natural Language Processing (NLP) algorithm is proposed in [3]. In [4], the Cocke-Younger-Kasami (CYK) algorithm is used for language translation; the authors used a normal parse tree rather than a Chomsky Normal Form (CNF) parse tree because of problems during the transformation phase. Morphological analysis is done in [5], where morphemes are the minimal units of meaning in grammatical analysis. A phrasal Example Based Machine Translation (EBMT) is described in [6]. Adaptive rule-based machine translation between English and Bengali is used in [7]; that paper concentrates on rules found by proper translation from English to Bengali. Comprehensive Roman (English) to Bengali transliteration is defined in [8], where a phonetics-lexicon-based English-Bengali transliteration is designed. Verb-based machine translation (VBMT), a new approach to machine translation (MT) from English to Bangla, is proposed in [9].

II. MACHINE TRANSLATION
Machine translation (MT) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another. There are various approaches to machine translation: i) word-for-word translation, ii) the direct approach, iii) the transfer approach, iv) the corpus-based approach, v) the Interlingua approach and vi) Statistical Machine Translation (SMT).

A. Word-for-Word Translation
This approach uses a machine-readable bilingual dictionary to translate each word in a text. An example is given in Table I.

TABLE I. DICTIONARY
English   Bengali
I         আিম
Eat       খাই
Rice      ভাত

The advantage of this approach is that it is easy to implement and the result gives a rough idea of what the text is about; the disadvantage is that problems with word order result in low-quality translation.

B. Transfer Approach
The transfer model involves three stages: a) analysis, b) transfer and c) generation. In the analysis stage the source-language sentence is parsed, and the sentence structure and the constituents of the sentence are identified. Example: "I eat rice." Here the words are: I, eat, rice; and the sentence structure is: [subject] [verb] [object]. In the transfer stage, transformations are applied to the source-language parse tree to convert its structure to that of the target language (Fig. 1). Although there is some kind of 'transfer' in any translation system, the term transfer method applies to those systems which have bilingual modules between intermediate representations of each of
the two languages [10].

Fig. 1. Three stages of the transfer approach.

C. Direct Approach
The most primitive strategy is called the direct MT strategy, which always operates between a pair of languages and is based on good glossaries and morphological analysis. The direct approach lacks any kind of intermediate stage in the translation process: the processing of the source-language input text leads 'directly' to the desired target-language output text [10]. The direct approach takes five steps to translate. Example sentence: "You are playing football."
1. Morphological analysis: You | playing [present continuous] | football
2. Identify constituents: <You> <playing [present continuous]> <football>
3. Reorder according to the target language: <You> <football> <playing [present continuous]>
4. Look up in the source-target language dictionary: <তিম> <ফুটবল> <খলছ>
5. Inflect: তিম ফুটবল খলছ

D. Corpus-based Approach
In the corpus-based MT (CBMT) approach, two parallel corpora are available in the source language (SL) and target language (TL), in which sentences are aligned. Translation is done by first matching fragments against the parallel corpus, then adapting the matched fragments to the TL, and finally reassembling these translated fragments appropriately. Fig. 2 shows an example. The corpus-based approach thus entails three steps:
1. Matching fragments against the parallel training corpora.
2. Adapting the matched fragments to the target language.
3. Recombining these translated fragments appropriately.

Fig. 2. Corpus-based approach.

E. Interlingua Approach
The most advanced system is called the Interlingua MT strategy. In the Interlingua method, the source text is analyzed into a representation from which the target text is directly generated. The intermediate representation includes all information necessary for the generation of the target text without 'looking back' to the original text [10]. The idea behind this approach is to create an artificial language, known as the Interlingua, which shares all the features and makes all the distinctions of all languages. To translate between two different languages, an analyzer is used to put the source language into the Interlingua, and a generator converts the Interlingua into the target language. Two stages are followed in the Interlingua approach:
1. Extracting the meaning of a source-language sentence in a language-independent form.
2. Generating a target-language sentence from the meaning.

Fig. 3. Interlingua approach. Source sentence: "He bought a huge house" → স এক ট বড় বািড় িকেনিছল

F. Statistical Machine Translation (SMT)
SMT models take the view that every sentence in the target language (TL) is a translation of a source-language (SL) sentence with some probability. SMT systems also deduce language and translation models from very large quantities of monolingual and bilingual data, using a range of theoretical approaches to probability distribution and estimation [11]. The best translation of a sentence is the one with the highest probability. In SMT the three major components are the language model, the translation model and the search algorithm. If t is the target language and s the source language, then we can write

P(t|s) = P(s|t) P(t) / P(s)    (1)

where P(t|s) depends on P(t), the probability of the kind of sentences that are likely to occur in the language t. This is known as the language model. The way sentences in s get converted to sentences in t is captured by the translation model P(s|t).
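A minimal sketch (ours, with a toy dictionary rather than the paper's actual database; the properly spelled Bengali forms are our own) of the five direct-approach steps applied to the example above:

```python
# Toy stand-ins for the bilingual dictionary and the SVO->SOV reordering of
# the direct approach; entries are illustrative, not the paper's database.
DICTIONARY = {"you": "তুমি", "football": "ফুটবল",
              ("play", "present_continuous"): "খেলছ"}

def direct_translate(subject, verb, tense, obj):
    # Steps 2-3: constituents are reordered from SVO (English) to SOV (Bengali).
    ordered = [subject, obj, (verb, tense)]
    # Steps 4-5: dictionary lookup yields the inflected Bengali surface form.
    return " ".join(DICTIONARY[w] for w in ordered)

print(direct_translate("you", "play", "present_continuous", "football"))
# -> তুমি ফুটবল খেলছ
```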
D. Corpus-based Approach

In the corpus-based MT (CBMT) approach, two parallel corpora are available, one in the source language (SL) and one in the target language (TL), with sentences aligned. Translation is done by matching fragments against the parallel corpus, adapting the matched fragments to the TL, and finally reassembling the translated fragments appropriately. Fig. 2 shows an example. The corpus-based approach entails three steps:

1. Matching fragments against the parallel training corpora.
2. Adapting the matched fragments to the target language.
3. Recombining these translated fragments appropriately.

Fig. 2. Corpus-Based Approach

E. Interlingua Approach

The most advanced system is the Interlingua MT strategy. In the Interlingua method, the source text is analyzed into a representation from which the target text is directly generated. This intermediate representation includes all information necessary for generating the target text without 'looking back' to the original text [10]. The idea behind this approach is to create an artificial language, the Interlingua, which shares all the features and makes all the distinctions of all languages. To translate between two languages, an analyzer maps the source language into the Interlingua, and a generator converts the Interlingua into the target language. The Interlingua approach follows two stages:

1. Extracting the meaning of a source-language sentence in a language-independent form.
2. Generating a target-language sentence from that meaning.

Fig. 3. Interlingua Approach

Example source sentence: "He bought a huge house" → স এক ট বড় বািড় িকেনিছল

F. Statistical Machine Translation (SMT)

SMT models take the view that every sentence in the target language (TL) is a translation of a source-language (SL) sentence with some probability. SMT systems deduce language and translation models from very large quantities of monolingual and bilingual data, using a range of theoretical approaches to probability distribution and estimation [11]. The best translation of a sentence is the one with the highest probability. SMT has three major components: the language model, the translation model, and the search algorithm. If t is the target language and s is the source language, we can write

P(t|s) = P(s|t) P(t) / P(s)   (1)

where P(t) is the probability of the kinds of sentences that are likely to occur in the language t; this is known as the language model. The way sentences in s get converted into sentences in t is captured by the translation model P(s|t).

III. IMPLEMENTED METHODS

Three implemented machine translation methods — i) the direct approach, ii) the corpus-based approach, and iii) the transfer approach — are discussed here. The implementations can deal with multi-line input and are not case sensitive. All 12 tenses were taken into consideration while implementing these methods, and different sentence patterns and structures can be handled. How sentence structure can change the meaning of a sentence is shown in Table II:

TABLE II. SENTENCE STRUCTURE
I play football         আিম ফুটবল খিল
I have played football  আিম ফুটবল খেলিছ
I have a football       আমার ফুটবল আেছ
You play football       তিম ফুটবল খেলা
You have a football     তামার এক ট ফুটবল আেছ
He plays football       স ফুটবল খেল
He has a football       তার এক ট ফুটবল আেছ

From the table we can see that if a verb follows the subject or the auxiliary verb, the meaning of "I" is "আিম", but if no verb follows the auxiliary verb, the meaning of "I" is "আমার". The same sentence-structure rule applies to the second and third persons, as Table II shows. The implemented systems can also deal with negative sentences and sentences with more than one object. A toy sketch of the subject-form rule follows.
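The sketch below illustrates the rule just described: the Bengali form of the subject depends on whether a main verb follows the subject (or its auxiliary). The word lists and function name are our own illustrative assumptions, and the Bengali strings are reproduced as rendered in this text.

```python
# Toy sketch of the subject-meaning rule from Table II.
MAIN_VERBS = {"play", "played", "playing", "eat"}
AUXILIARIES = {"have", "has", "am", "is", "are"}
SUBJECT_FORMS = {
    "i":   {"with_verb": "আিম", "without_verb": "আমার"},
    "you": {"with_verb": "তিম", "without_verb": "তামার"},
    "he":  {"with_verb": "স",   "without_verb": "তার"},
}

def subject_form(tokens):
    # tokens: a lower-cased English sentence with the subject first.
    subject, rest = tokens[0], tokens[1:]
    # Skip a leading auxiliary, then check whether a main verb remains.
    if rest and rest[0] in AUXILIARIES:
        rest = rest[1:]
    key = "with_verb" if any(t in MAIN_VERBS for t in rest) else "without_verb"
    return SUBJECT_FORMS[subject][key]

print(subject_form("i have played football".split()))  # -> আিম  (verb follows)
print(subject_form("i have a football".split()))       # -> আমার (no main verb)
```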
A. Direct approach

The output of the direct approach for the sentence "I am playing football in the field" is shown in Fig. 4.

Fig. 4. Example of the Direct Approach

The advantage of the direct approach is that the output is more accurate on a small dataset; with a large dataset, however, this approach cannot achieve the best result.

B. Corpus-based approach

The main advantage of this method is that it finds the senses of words and phrases in different contexts quickly. Moreover, language users find it profitable, since a corpus provides a large collection of grammatical patterns, collocations, and colligations of words and phrases to aid analysis in a very short time. The disadvantages are that using corpora can be a time-consuming task and that the collections of texts in corpora may cause problems in analysis.

Fig. 5. Example of the Corpus-Based Approach

C. Transfer Approach

This translation strategy can produce fairly high-quality translations. However, sometimes it cannot render all word meanings properly, and sometimes some words are missing from the output text.

Fig. 6. Example of the Transfer Approach

D. Tense-based translation with the implemented methods

In Table III, twelve different sentences, one per tense, are translated from English to Bengali by our implemented methods and compared with Google Translate. In the original table, wrong translations were set in bold. Overall, our implemented systems (direct, transfer, and corpus) give better results across all 12 tenses than Google Translate.

TABLE III. TRANSLATION OF 12 TENSES FROM ENGLISH TO BENGALI, COMPARED WITH GOOGLE TRANSLATE

1. Present: "I play football" — Accurate: আিম ফুটবল খিল; Direct, Transfer, Corpus, Google: same as accurate.
2. Present Continuous: "I am playing football" — Accurate: আিম ফুটবল খলিছ; Direct, Transfer, Corpus, Google: same as accurate.
3. Present Perfect: "We have played football" — Accurate: আমরা ফুটবল খেলিছ; Direct, Transfer, Corpus, Google: same as accurate.
4. Present Perfect Continuous: "We have been playing football for 2 hours" — Accurate: আমরা ২ ঘ া ধের ফুটবল খলেতিছ; Direct, Transfer, Corpus: same as accurate; Google: আমরা ২ ঘ া ধের ফুটবল খেলিছ.
5. Past: "You played football" — Accurate: তিম ফুটবল খলেল; Direct, Transfer, Corpus: same as accurate; Google: আপিন ফুটবল খলা.
6. Past Continuous: "You were playing football in the field" — Accurate: তিম মােঠ ফুটবল খলিছেল; Direct, Transfer, Corpus: same as accurate; Google: আপিন ে র মেধ ফুটবল খলা িছল.
7. Past Perfect: "He had played football in the field" — Accurate: স মােঠ ফুটবল খেলিছল; Direct: আিম মােঠ ফুটবল খেলিছলাম; Transfer: আিম মােঠ ফুটবল খেলিছলাম; Corpus: স মােঠ ফুটবল খলেব; Google: িতিন ে র মেধ ফুটবল খলা িছল.
8. Past Perfect Continuous: "He had been playing football in the field for 2 hours" — Accurate: স মােঠ ২ ঘ া ধের ফুটবল খিলেতিছল; Direct: আিম মােঠ ২ ঘ া ধের ফুটবল খেলিছলাম; Transfer: আিম মােঠ ২ ঘ া ধের ফুটবল খেলিছলাম; Corpus: স ২ ঘ া ধের মােঠ ফুটবল খেলেতিছল; Google: িতিন ২ ঘ া সময় ে র ফুটবল খলিছেলন.
9. Future: "They will play football in the field" — Accurate: তারা মােঠ ফুটবল খলেব; Direct, Transfer, Corpus: same as accurate; Google: তারা ে রফুটবল খলেব.
10. Future Continuous: "They will be playing football in the field for 3 hours" — Accurate: তারা ৩ ঘ া ধের মােঠ ফুটবল খলেব; Direct: তারা মােঠ ৩ ঘ া ধের ফুটবল খলেব; Transfer: তারা মােঠ ৩ ঘ া ধের ফুটবল খলেব; Corpus: same as accurate; Google: তারা িতন ঘ ার জন মােঠ ফুটবল খলেব.
11. Future Perfect: "They will have played football" — Accurate: তারা ফুটবল খেল থাকেব; Direct, Transfer, Corpus: same as accurate; Google: তারা ফুটবল খেলেছ.
12. Future Perfect Continuous: "They will have been playing football for 3 hours" — Accurate: তারা ৩ ঘ া ধের ফুটবল খলেত থাকেব; Direct, Transfer, Corpus: same as accurate; Google: তারা ৩ ঘ ার জনফুটবল খলেবন.

IV. EXPERIMENTAL RESULT

The program used for finding the accuracy rate compares two files: the original file and the output file of an implemented approach (direct, transfer, or corpus). First, the program counts the sentences and words in the original file. The comparison is then done word by word and sentence by sentence: whenever a word mismatch is found, the word mismatch count is incremented, and whenever a sentence mismatch is found, the sentence mismatch count is incremented. Finally, the program computes the word and sentence accuracy rates using equations (2) and (3):

Word correct rate = ((Total words − Word mismatches) / Total words) × 100%   (2)
Sentence correct rate = ((Total sentences − Sentence mismatches) / Total sentences) × 100%   (3)
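The following is a minimal sketch of this evaluation, implementing equations (2) and (3). It assumes one sentence per line in both files and a positional word-by-word comparison; these are our assumptions, since the paper's comparison program is not published.

```python
# Sketch of the accuracy computation in equations (2) and (3).
def accuracy_rates(reference_path, output_path):
    with open(reference_path, encoding="utf-8") as f:
        ref_lines = [line.strip() for line in f]
    with open(output_path, encoding="utf-8") as f:
        out_lines = [line.strip() for line in f]

    sent_total = len(ref_lines)
    sent_mismatch = sum(r != o for r, o in zip(ref_lines, out_lines))

    word_total = word_mismatch = 0
    for r, o in zip(ref_lines, out_lines):
        r_words, o_words = r.split(), o.split()
        word_total += len(r_words)
        # Position-by-position comparison; length differences also count.
        word_mismatch += sum(a != b for a, b in zip(r_words, o_words))
        word_mismatch += abs(len(r_words) - len(o_words))

    word_acc = (word_total - word_mismatch) / word_total * 100
    sent_acc = (sent_total - sent_mismatch) / sent_total * 100
    return word_acc, sent_acc
```

As a sanity check against Table V, the direct approach's figures give (5379 − 1029) / 5379 × 100 ≈ 80.87% word accuracy and (1027 − 324) / 1027 × 100 ≈ 68.45% sentence accuracy, matching Table IV.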
In total, 1027 sentences and 5379 words were applied to the three implemented machine translation approaches (direct, corpus-based, and transfer) and to Google Translate, yielding different accuracy rates.

TABLE IV. ACCURACY RATE OF THE DIFFERENT APPROACHES
                           Direct    Corpus-Based  Transfer  Google Translate
Sentence Correct Rate (%)  68.4518   79.1626       77.9942   15.8715
Word Correct Rate (%)      80.8701   94.8689       91.4296   44.1718

From Table IV it is clear that the sentence and word correct rates of the corpus-based approach are the highest among all four methods, while Google Translate shows the worst accuracy rate.

TABLE V. WORD AND SENTENCE COUNTS OF THE DIFFERENT APPROACHES
Method             Total words  Total sentences  Word mismatches  Sentence mismatches
Direct Approach    5379         1027             1029             324
Corpus-Based       5379         1027             276              214
Transfer Approach  5379         1027             461              226
Google Translate   5379         1027             3003             864

Table V shows that, for the direct approach, 5379 words were applied and 1029 words mismatched the original file; of the 1027 sentences, 324 mismatched. The lowest word mismatch count (276) and the lowest sentence mismatch count (214) both come from the corpus-based approach. From this evidence it can be said that the corpus-based approach is the most accurate of the four.

A. Comparison with Related Work

An extensive survey shows that not much work has been carried out on different tenses. "Tense Based English to Bangla Translation Using MT System" [2] considers all 12 tense forms, as does this work. However, that paper used 50 × 12 = 600 sentences, while this work used 1027 sentences, and that paper's overall accuracy is lower than this work's. In [2], accuracy rates differ from tense to tense: for some tenses it is 100%, while for others it is 76%-90%. Our implementation achieves 79.16% sentence accuracy across all tenses with corpus-based machine translation.

B. Why is the corpus-based method best?

There are two word files in this system: a subject file and a verb file. For each subject there is a flag corresponding to the verb. For example, if the sentence is "I play", the meaning of "play" is "খিল", whereas if the sentence is "You play", the meaning of "play" is "খল". The corpus-based system first collects all possible meanings of the verb; the most suitable meaning is then selected for the final translation. For example, the intermediate translation of "I play football" in the corpus-based method is "আিম ফুটবল খিল খল"; after the final matching, the final translation is "আিম ফুটবল খিল". That is why the corpus-based method gives the best result.
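The toy sketch below illustrates this generate-then-match idea: all candidate verb meanings are produced first, and the form whose subject flag matches is kept. The data layout and the matching rule are our own illustrative assumptions about the mechanism described above.

```python
# Toy sketch of corpus-based verb disambiguation via subject flags.
# Each Bengali verb form carries the subject it agrees with.
VERB_FORMS = {"play": [("খিল", "i"), ("খল", "you")]}
SUBJECTS = {"i": "আিম", "you": "তিম"}

def translate_corpus(subject, obj, verb):
    candidates = VERB_FORMS[verb]
    # Intermediate stage: keep every candidate verb meaning.
    intermediate = " ".join(form for form, _ in candidates)
    print("intermediate:", SUBJECTS[subject], obj, intermediate)
    # Final matching: keep the form whose flag matches the subject.
    final = next(form for form, flag in candidates if flag == subject)
    return " ".join([SUBJECTS[subject], obj, final])

print(translate_corpus("i", "ফুটবল", "play"))
# intermediate: আিম ফুটবল খিল খল
# -> আিম ফুটবল খিল
```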
V. CONCLUSION

This paper investigated several machine translation methods and implemented them. Among the implemented methods, the corpus-based method provides better results than Google Translate and the other implemented methods. The aim of this work was to identify the best method — which turned out to be the corpus-based one — so that future work can concentrate on that method and raise its accuracy further. As this is a complicated task, more improvement is required in detecting multiple Bengali meanings for an English word and in improving the artificial intelligence that detects phrases and idioms. Identifying multiple meanings, phrases, and idioms, along with developing a strong data dictionary, will be addressed in future work, as will handling interrogative sentences.

REFERENCES
[1] S. Ahmed, M. O. Rahman, S. R. Pir, M. A. Mottalib, and Md. S. Islam, "A New Approach towards the Development of English to Bengali Machine Translation System," in International Conference on Computer and Information Technology (ICCIT), pp. 360-364, Jahangirnagar University, Dhaka, Bangladesh, 2003.
[2] K. Muntarina, Md. G. Moazzam, and Md. A.-A. Bhuiyan, "Tense Based English to Bangla Translation Using MT System," International Journal of Engineering Science Invention, vol. 2, no. 10, pp. 30-38, Oct. 2013.
[3] S. A. Rahman, K. S. Mahmud, B. Roy, and K. M. A. Hasan, "English to Bengali Translation Using A New Natural Language Processing Algorithm," in International Conference on Computer and Information Technology (ICCIT), pp. 294-298, Jahangirnagar University, Dhaka, Bangladesh, 2003.
[4] S. Dasgupta, A. Wasif, and S. Azam, "An Optimal Way of Machine Translation from English to Bengali," in ICCIT, 2004.
[5] A. N. K. Zaman, Md. A. Razzaque, and A. K. M. K. Ahsan Talukder, "Morphological Analysis for English to Bengali Machine Aided Translation," in National Conference on Computer Processing of Bangla, Dhaka, Bangladesh, 2004.
[6] S. K. Naskar and S. Bandyopadhyay, "A Phrasal EBMT for Translating English to Bengali," in MT Summit X, Kolkata, India, 2005.
[7] J. Francisca, Md. M. Mia, and S. M. M. Rahman, "Adapting Rule Based Machine Translation From English to Bangla," Indian Journal of Computer Science and Engineering (IJCSE), vol. 2, no. 3, pp. 334-342, 2011.
[8] N. UzZaman, A. Zaheen, and M. Khan, "A comprehensive Roman (English)-to-Bangla transliteration scheme," 2006.
[9] M. Rabbani, K. Md R. Alam, and M. Islam, "A new verb based approach for English to Bangla machine translation," in International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1-6, IEEE, 2014.
[10] W. J. Hutchins and H. L. Somers, An Introduction to Machine Translation, vol. 362. London: Academic Press, 1992.
[11] A. Way and N. Gough, "Comparing Example-Based and Statistical Machine Translation," Natural Language Engineering, vol. 11, no. 3, pp. 295-309, 2005.
One-Expression Classification in Bengali and its Role in Bengali-English Machine Translation

Apurbalal Senapati, CVPR Unit, Indian Statistical Institute, Kolkata, India, apurbalal.senapati@gmail.com
Utpal Garain, CVPR Unit, Indian Statistical Institute, Kolkata, India, utpal.garain@gmail.com

Abstract— This paper analyzes one-expressions in Bengali and shows their effectiveness for machine translation. The characteristics of one-expressions are studied in a 177-million-word corpus. A classification scheme is proposed for grouping the one-expressions. The features contributing to the classification are identified, and a CRF-based classifier is trained on an author-generated annotated dataset containing 2006 instances of one-expressions. The classifier's performance is tested on a test set (containing 300 instances of Bengali one-expressions) that is disjoint from the training data. Evaluation shows that the classifier correctly classifies the one-expressions in 75% of cases. Finally, the utility of this classification task is investigated for Bengali-English machine translation. Translation accuracy improves from 39% (by Google Translate) to 60% (by the proposed approach), and this improvement is statistically significant. All the annotated datasets (there were none before) are made freely available to facilitate further research on this topic.

Keywords- one-expressions; Bengali; corpus; machine translation

I. INTRODUCTION

One-expressions play an important role in many areas of NLP, for instance anaphora resolution, question answering, and machine translation. Consider the following sentences in Bengali (given in romanization):

S1: ek samay sekhAne ek rAjA chilen. (Once upon a time, there was a king.)

There are two one-expressions (both ek) in the Bengali sentence S1. When translating this sentence into English, the first one-expression is translated to "once" and the second to "a". There are instances when the same one-expression (e.g. ek) is used in an inflected form and is translated to the number "one". For example:

S2: bAzAre ektAo lok nei. (There is no one in the market.)

In this sentence, the one-expression ektAo (an inflected form of ek) is translated as "one". Sometimes the one-expression is not translated at all. For example, consider this sentence:

S3: rAm o shyAm ke ek kore dekhA thik noi. (It is not right to treat Ram and Shyam similarly.)

In this sentence the one-expression ek (one) has not been translated at all. The discussion above shows that the same one-expression behaves differently in different contexts; therefore, its translation in the target language varies depending on its particular type (or class). Hence, classifying one-expressions is an important step toward understanding their behavior and subsequently determining their translation. This work carries out this task for Bengali.

II. PREVIOUS STUDIES

The computational analysis of one-expressions in Bengali has not been explored before; there is hardly even a linguistic study on the classification of Bengali one-expressions. The computational hardship stems from the unavailability of annotated datasets (marking one-expressions in sentences and tagging them with their respective classes). Research on machine translation of Indic languages into other languages is also gaining importance in
Research on machine translation of Indic languages into other languages is also gaining importance in recent times¹, and therefore one-expressions have not yet had a chance to be looked at. However, statistics show that in Indic languages like Bengali, one-expressions are used often. A study on the 177-million-word FIRE Bengali Corpus² shows that about 1.34 million words are one-expressions. Obviously, they demand additional processing effort for machine translation. This finding conforms to the observation of Hwee Tou Ng et al. [1], who reported the statistics of the word one in the 100-million-word British National Corpus (BNC) and claimed that one cannot simply ignore one in any NLP application. In the case of English, one-expressions have been studied while dealing with one-anaphora [2, 3, 4, 5]. Halliday and Hasan [2], Dahl [4], and Luperfoy [5] identify major criteria that distinguish the non-anaphoric uses of one from each other. Hwee Tou Ng et al. [1] classified the uses of one into six classes: Numeric (John has one blue T-shirt), Partitive (A special exhibition of books for children forms one of the centrepieces), Anaphoric (Would you like this book? Yes, I would like that one), Generic (One must think a little deeper to discover the underlying social roots of the problem), Idiomatic (It would be perfect to have a loved one accompany me on the whole trip), and Unclassifiable (Cursed be one who curses you). Out of these classes, they concentrate on the anaphoric class and use a machine learning approach to the identification and resolution of one-anaphora. In our study, we follow the classification scheme of Hwee Tou Ng et al. [1], with some extension, for classifying the Bengali one-expressions.

III. OUR CONTRIBUTION
The distinct contributions of this work are (i) an exhaustive study of Bengali one-expressions (from a 177-million-word corpus) and their classification; (ii) preparation of two annotated datasets (details are in Section V): the first containing 1806 sentences with 2006 instances of one-expressions, and the second containing 296 sentences with 300 instances of one-expressions, where each one-expression is tagged with its respective class; (iii) a study of the features contributing significantly to the classification of one-expressions, followed by the development of a CRF-based classifier for their automatic classification; the bigger of the two annotated datasets is used to train the classifier, which is tested on the second dataset; and (iv) a demonstration of the utility of this work in the context of Bengali-English machine translation.

¹ A nation-wide consortium for machine translation of Indic languages is being funded by the Ministry of Information Technology, Govt. of India, http://www.tdil-dc.in.
² FIRE: Forum for Information Retrieval Evaluation; http://www.isical.ac.in/~fire/data.html

IV. FREQUENCY OF ONE-EXPRESSIONS AND THEIR CLASSIFICATION IN BENGALI
One-expressions in Bengali are more complex than in a language like English. Since Bengali is a highly agglutinative language, most words are heavily inflected. In our experiment, we identified twenty-one commonly used forms of one-expressions [ek, ekta (ek with the -ta classifier), ekti (ek with the -ti classifier), ektai (ek with the -tai inflection), ektar (ek with the -tar inflection), …].
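Given these surface forms, the corpus frequency reported next can be estimated with a simple token count. The following is a minimal sketch, not the authors' code: it assumes transliterated, whitespace-tokenizable text (the paper works on the original Bengali script of the FIRE corpus), and it lists only five of the twenty-one surface forms.

import re

# Hypothetical transliterated surface forms of one-expressions (the paper
# identifies twenty-one such forms; only five are listed here).
ONE_FORMS = {"ek", "ekta", "ekti", "ektai", "ektar"}

def one_expression_ratio(lines):
    """Return the fraction of tokens that are one-expression forms."""
    total = ones = 0
    for line in lines:
        for token in re.findall(r"\S+", line):
            total += 1
            if token.lower() in ONE_FORMS:
                ones += 1
    return ones / total if total else 0.0

# Usage (hypothetical file name):
#   with open("fire_bengali_corpus.txt", encoding="utf-8") as f:
#       print(one_expression_ratio(f))   # the paper reports about 0.0076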
One-expressions are used quite frequently in Bengali. We investigated their frequency in the 177-million-word FIRE Bengali Corpus [6] and found that about 0.76% of the words (about 1.34 million words) in the corpus are one-expressions, counting all morphological variations. This clearly shows the dominant presence of one-expressions in Bengali. As a reference, one may note that the words na (no) and kare (do) are the two most frequent words in the FIRE corpus, with occurrence frequencies of 0.66% and 0.60%, respectively.

The classification of one-expressions is based on the instances found in the FIRE corpus. We follow the classification scheme of Hwee Tou Ng et al. [1] with one exception: instead of six classes, we found seven dominant classes among the Bengali one-expressions. The Equality class (explained next), which is not relevant for English, is found to be quite common in Bengali. The seven classes are explained as follows:

A. Idiomatic one (IDO): In Bengali it acts like a particle and is generally associated with the definite or indefinite singularity of an entity. Functionally it is very similar to the indefinite/definite articles "a/an, the" in English. Example: (ekoda ajyodhay [ek] rAjA chhilo) / once upon a time there was [a] king at Ajyodha.

B. Numeric one (NUM): Indicates the numeric (cardinal) value one. Example: (AmAr kAchhe mAtro [ek] tAkA Achhe) / I have only [one] rupee.

C. Partitive one (PAT): Selects an individual from a group of objects. Example: ([kono ek] jhuprite rAnnAr samayei oi Agun lAge bale sandeha karA hachhe) / It is suspected that the fire broke out in some [one] of the huts from a cooking oven.

D. Anaphoric one (ANA): The one having a referent. Example: (or duto walkman Achhe, [ektA] aami niye nebo) / He has two walkmans; I will take [one].

E. Equality one (EQU): This one is used to express the equality of two or more entities. Example: (santrashbAdider sange gotA islami duniyAke [ek] kore dekhA thik noi) / It is unjust to treat the terrorists and the entire Islamic community [equally]. Note: one interesting property of Bengali is the frequent use of the word ek-i (same), whose root form is ek (one); however, the expression ek-i does not count as a one-expression.

F. Generic one (GEN): A pronominal use that refers to a generic entity. Example: (prAthamik bhAbe pulicer onumAn, sabhAy hAjir keo [ek] jan bomAti sange niye esechhilo) / Primarily the police suspect that some[one] attending the meeting carried the bomb.

G. Other one (OTH): A one-expression other than the above six classes. Example: ([ek kothai], rAjnitir ghurnAbarte pariA bAnglA Aj nAnA dikei paryudasta) / [In brief], Bengal, in many aspects, is now in a disastrous condition due to its political practices.
V. PREPARATION OF ANNOTATED DATA
From the FIRE corpus, we randomly selected 1806 sentences containing 2006 one-expressions and manually annotated each with one of the seven classes described above. The distribution of the classes in the annotated corpus is shown in TABLE I. We call this annotated dataset ℑr, as it is used to train a CRF-based classifier, as explained in the next section. Note that this distribution of one-expressions differs from that in other languages. For example, the experiment conducted in [1] found the Numeric class (46.9%) to be the most frequent, followed by Partitive (25.3%); Idiomatic (1.6%) was much less frequent in their dataset of 1,577 one-expressions randomly selected from the BNC corpus.

TABLE I. DISTRIBUTION OF ONE-EXPRESSIONS IN THE ANNOTATED DATASET
Class       Frequency   Percentage (%)
Idiomatic   544         27.12
Partitive   415         20.69
Numeric     362         18.05
Generic     266         13.26
Equality    114         5.68
Anaphoric   98          4.88
Other       207         10.32
Total       2006        100

VI. AUTOMATIC CLASSIFICATION OF BENGALI ONE-EXPRESSIONS
We configured a CRF-based classifier for the automatic classification of Bengali one-expressions. In our experiment, we used the open-source Java package MAchine Learning for LanguagE Toolkit (MALLET)³. A set of seven features that contribute significantly to classifying the one-expressions was identified with the help of linguists. These seven features are described below:
• POS tag of one (W0): the POS tag of the one-expression. In our experiment, the POS of a one-expression is either QC (cardinal) or NN (common noun). [For POS tagging we used a previously developed Bengali POS tagger, obtained by retraining the Stanford tagger on about 10K tagged Bengali sentences; the tagger is about 92% accurate.]
• Inflection (classifier) of one: the inflection (or classifier) attached to one. We consider twenty-one such inflections and classifiers {-ta, -ti, -tai, -tir, -tite, …}.
• Previous word (W−1) of one: the word immediately preceding one.
• Next word (W+1) of one: the word immediately following one.
• Sentence starts with one: whether one is the first word of the sentence.
• Sentence ends with one: whether one is the last word of the sentence.
• Measuring unit following one: whether the word following one is a measuring unit (hAzAr/thousand, keji/kilogram, …).

³ http://mallet.cs.umass.edu/sequences.php
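To make this feature set concrete, the following is a minimal sketch of extracting the seven features for a single one-expression occurrence. It is not the authors' code: MALLET consumes features through its own column format, the inflection and unit lists are truncated, and the helper inputs (parallel transliterated token and POS lists) are illustrative assumptions.

# Longest suffixes first so that, e.g., "-tai" is preferred over "-ta".
INFLECTIONS = ("tite", "tai", "tir", "ta", "ti")
MEASURING_UNITS = {"hAzAr", "keji"}  # thousand, kilogram, ... (truncated)

def one_features(tokens, pos_tags, i):
    """tokens/pos_tags: parallel lists for one sentence (transliterated);
    i: index of the one-expression token. Returns the seven features."""
    word = tokens[i]
    inflection = next((s for s in INFLECTIONS if word.endswith(s)), "")
    return {
        "POS_W0": pos_tags[i],                       # QC or NN in practice
        "INFLECTION": inflection,                    # -ta, -ti, -tai, ...
        "W-1": tokens[i - 1] if i > 0 else "<BOS>",  # previous word
        "W+1": tokens[i + 1] if i + 1 < len(tokens) else "<EOS>",
        "SENT_START": i == 0,
        "SENT_END": i == len(tokens) - 1,
        "UNIT_NEXT": i + 1 < len(tokens) and tokens[i + 1] in MEASURING_UNITS,
    }

# Usage (hypothetical sentence and tags):
#   one_features(["AmAr", "kAchhe", "ekta", "boi", "Achhe"],
#                ["PRP", "NST", "QC", "NN", "VM"], 2)
#   -> {"POS_W0": "QC", "INFLECTION": "ta", "W-1": "kAchhe", ...}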
VII. TRAINING DATA
The annotated dataset ℑr is used for training the CRF. The sentences in ℑr are POS tagged and the one-expressions are tagged with their respective class labels. The annotated dataset is presented in a column format as shown in TABLE II, and TABLE III describes the data format in detail. The CRF-based classifier uses maximum likelihood for training, the forward-backward algorithm for feature expectations, and a Gaussian prior for parameter optimization.

TABLE II. THE TRAINING DATA FORMAT
.......................................................
txt1.txt  0  11      QC   o
txt1.txt  1  [word]  NN   o
txt1.txt  2  ek      QC   NUM
txt1.txt  3  [word]  NN   o
txt1.txt  4  [word]  NST  o
txt1.txt  5  [word]  NN   o
txt1.txt  6  [word]  NN   o
.......................................................

TABLE III. DESCRIPTION OF THE TRAINING DATA FORMAT
Column   Type             Description
1        Document id      Contains the file name
2        Word number      Word index in the sentence
3        Word             The word itself
4        POS              POS tag of the word
5        Classification   Classification tag

VIII. EVALUATION
The classification system was evaluated on the publicly available ICON 2011 dataset [7], which was prepared primarily for Bengali anaphora resolution. This dataset consists of nine text pieces, and we extended it by adding four more texts. The combined dataset (ℑe) has previously been used for the evaluation of Bengali anaphora resolution systems [8, 9]. The choice of this dataset is somewhat intentional, as it has been annotated for anaphora resolution: since one-anaphora is one of the one-expression classes, annotation with one-expression information will help subsequent research on the resolution of one-anaphora. The data in ℑe are presented in the same format as shown in TABLE II. TABLE IV shows the coverage of ℑe in terms of the number of text pieces, words, and one-expressions.

TABLE IV. COVERAGE OF THE TEST DATASET ℑe
#texts             13
#words             27454
#one-expressions   300

TABLE V. EVALUATION OF ONE-EXPRESSION CLASSIFICATION
Class   #inst.  #corr.  #fp   Recall  Precision  F1
IDO     103     85      25    .83     .77        .80
NUM     84      77      37    .92     .68        .78
OTH     43      29      3     .67     .91        .77
PAT     32      12      0     .38     1.0        .55
GEN     17      7       1     .41     .88        .56
ANA     16      13      8     .81     .62        .70
EQU     5       3       0     .60     1.0        .75
Total   300     226     74    .75     .75        .75

TABLE V gives the results of one-expression classification for each of the seven classes. The average accuracy of one-expression classification is about 75%, whereas Idiomatic (80%), Numeric (78%), and Other (77%) achieve relatively better F1-scores. As far as recall and precision are concerned, the NUM class shows the highest recall and the PAT class shows the highest precision. The most dominant class, i.e., IDO, shows the highest F1-score.

IX. ERROR ANALYSIS
Most of the errors occur due to inter-class confusion. TABLE VI shows the confusion matrix; IDO and NUM are the two classes that create major confusion. For all other classes, the dominant tendency is to be confused with either the IDO or the NUM class. This is because some features (classifiers/inflections, e.g., -ta/-ti; the POS tag QC/cardinal, etc.) strongly favour the Idiomatic and Numeric classes. Many instances of the PAT class are also confused, but these confusions are spread over three different classes, with confusion with ANA being the most significant. This is because the features of the Partitive (PAT) class are very close to those of the Anaphoric (ANA) class; in fact, some instances of the Partitive class are a special kind of anaphoric one-expression.

TABLE VI. CONFUSION MATRIX FOR THE CLASSIFICATION OF ONE-EXPRESSIONS
        IDO  NUM  PAT  ANA  EQU  GEN  OTH
IDO     ×    17   0    0    0    0    1
NUM     6    ×    0    0    0    0    1
PAT     5    6    ×    8    0    1    0
ANA     1    1    0    ×    0    0    1
EQU     2    0    0    0    ×    0    0
GEN     6    4    0    0    0    ×    0
OTH     5    9    0    0    0    0    ×
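The column reading of TABLE V (per class: number of instances, number correctly classified, number of false positives, then recall, precision, and F1) is consistent with the printed scores: recall = c/n, precision = c/(c+fp), and F1 is their harmonic mean. A minimal numerical check:

def prf(n, c, fp):
    """n: #instances, c: #correctly classified, fp: #false positives."""
    recall = c / n
    precision = c / (c + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

print(prf(103, 85, 25))  # IDO -> (0.825, 0.773, 0.798), i.e. .83/.77/.80
print(prf(84, 77, 37))   # NUM -> (0.917, 0.675, 0.777), i.e. .92/.68/.78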
X. EFFECT ON MACHINE TRANSLATION
At the beginning of our discussion we showed that, as one-expressions behave differently in different contexts, producing their right translation (in English, a Bengali one-expression can be translated to a / an / the / one / only one / someone / once / equally / similarly, etc.) is a challenging task. We hypothesize that the classification of one-expressions helps in producing the right translation. We tested this hypothesis in the following way. The dataset ℑe contains 300 instances of one-expressions, occurring in 296 sentences. These 296 sentences were translated to English using the Google Bengali-English translator⁴, and the translations of the 300 one-expressions were marked. The proper English translations of these 300 one-expressions (in the context of their containing sentences) were produced manually. It was found that the Google translator produced correct translations for 117 (39%) of the one-expressions. Note that we are concerned with the translation of the one-expressions only. Next, we associated the most dominant English translation with each of six classes of one-expressions (for the OTH class, we could not associate any translation). For example, the most dominant translation of the IDO class is a/an, of the NUM class one, and so on. A one-expression of a particular class can have several English translations, but for the sake of simplicity we consider just one of them, the most dominant one. Once a Bengali one-expression is classified, its English translation is the word associated with the respective class. By doing so, we could produce translations for 257 one-expressions (for the 43 OTH-class instances, we could not produce any translation). Out of these 257 translations, 179 are correct, i.e., 60% of the 300 one-expressions. The residual errors originate from two sources: (i) classification errors (recall that we classify with about 75% accuracy), and (ii) even when the classification is correct, the most dominant translation (a static one that ignores context) does not always give the correct translation. Even this simple framework improves the translation accuracy from 39% (by the Google translator, which does consider context) to 60% (by simply classifying the one-expressions and replacing them with their class-specific dominant, static translation). This improvement is statistically significant (p-value < 0.01 in a two-tailed paired t-test).

⁴ Google translator: http://translate.google.co.in/#en/bn/
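A minimal sketch of this substitution strategy follows; it is not the authors' pipeline. Only the dominant translations of IDO (a/an) and NUM (one) are stated above, so the remaining entries of the mapping are assumptions for illustration.

# Class -> most dominant English translation (static, context-free).
DOMINANT = {
    "IDO": "a",        # stated in the paper (a/an)
    "NUM": "one",      # stated in the paper
    "PAT": "one",      # assumption
    "ANA": "one",      # assumption
    "EQU": "equally",  # assumption
    "GEN": "one",      # assumption
    # "OTH": no translation could be associated
}

def translate_one_expression(predicted_class):
    """Return the class-specific static translation, or None for OTH."""
    return DOMINANT.get(predicted_class)

# The significance of the 39% -> 60% improvement can be checked by pairing
# the per-instance 0/1 correctness of the two systems, e.g. with SciPy:
#   from scipy.stats import ttest_rel
#   t, p = ttest_rel(correct_proposed, correct_google)  # paper: p < 0.01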
XI. CONCLUSIONS
This work strongly supports devoting additional effort to the processing of one-expressions in Bengali NLP. Future work includes further investigation of the features used for classification and better class-wise translation strategies. We have generated annotated datasets of 2246 sentences (combining ℑr and ℑe) containing 2306 instances of one-expressions. We make these datasets freely available to facilitate further research in this area; at the same time, the datasets need to be enlarged in order to design a more robust machine-learning-based classification scheme. The present CRF-based classifier gives about 75% accuracy, and one of the major reasons behind its inaccuracies is the small size of the training data. We also plan to extend the present research to the resolution of one-anaphora. For this, we plan to classify the one-expressions as one-anaphor or not (a two-class problem) and then resolve the one-anaphors by finding their correct antecedents.

REFERENCES
[1] H. Tou Ng, Y. Zhou, R. Dale, and M. Gardiner (2005). A Machine Learning Approach to Identification and Resolution of One-Anaphora.
[2] M. A. K. Halliday and R. Hasan (1976). Cohesion in English. Longman.
[3] B. Webber (1979). A Formal Approach to Discourse Anaphora. Garland Publishing Inc.
[4] D. A. Dahl (1985). The Structure and Function of One-Anaphora in English. PhD thesis, Univ. of Minnesota.
[5] S. Luperfoy (1991). Discourse Pegs: A Computational Analysis of Context-Dependent Referring Expressions. PhD thesis, Univ. of Texas at Austin.
[6] Forum for Information Retrieval Evaluation; http://www.isical.ac.in/~fire/data.html
[7] ICON NLP Tools Contest (2011). "Anaphora Resolution in Indian Languages," in 9th Int. Conf. on Natural Language Processing (ICON), Chennai, India.
[8] A. Senapati and U. Garain (2012). Anaphora Resolution in Bangla Using Global Discourse Knowledge. In Int. Conf. on Asian Language Processing (IALP), 49-52, Hanoi, Vietnam.
[9] A. Senapati and U. Garain (2013). GuiTAR-based Pronominal Anaphora Resolution in Bengali. In ACL, 126-130, Sofia, Bulgaria.
M.Sc. Engg. Thesis

Towards Achieving A Delicate Blending between Rule-based Translator and Neural Machine Translator for Bengali to English Translation

Md. Adnanul Islam (0416052015F)

Submitted to the Department of Computer Science and Engineering (in partial fulfilment of the requirements for the degree of Master of Science in Computer Science and Engineering)

Department of Computer Science and Engineering
Bangladesh University of Engineering and Technology (BUET)
Dhaka 1000
November 6, 2019

Dedicated to my loving parents

Author's Contact
Md. Adnanul Islam
House-516, Road-2, Block-I, Bashundhara R/A, Dhaka
Email: islamadnan2265@gmail.com

The thesis titled "Towards Achieving A Delicate Blending between Rule-based Translator and Neural Machine Translator for Bengali to English Translation", submitted by Md. Adnanul Islam, Roll No. 0416052015F, Session April 2016, to the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, has been accepted as satisfactory in partial fulfilment of the requirements for the degree of Master of Science in Computer Science and Engineering and approved as to its style and contents. Examination held on November 6, 2019.

Board of Examiners
Dr. A. B. M. Alim Al Islam, Chairman (Supervisor), Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka.
Dr. Md. Mostofa Akbar, Member (Ex-Officio), Head and Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka.
Dr. Mahmuda Naznin, Member, Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka.
Dr. Muhammad Abdullah Adnan, Member, Assistant Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka.
Dr. Md. Mahbubur Rahman, Member (External), Professor, Department of Computer Science and Engineering, Military Institute of Science and Technology, Dhaka.

Candidate's Declaration
It is hereby declared that the work titled "Towards Achieving A Delicate Blending between Rule-based Translator and Neural Machine Translator for Bengali to English Translation" is the outcome of research carried out by me under the supervision of Dr. A. B. M. Alim Al Islam in the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000. It is also declared that this thesis or any part of it has not been submitted elsewhere for the award of any degree or diploma.

Md. Adnanul Islam, Candidate

Acknowledgment
Foremost, I express my heartfelt gratitude to my supervisor, Dr. A. B. M. Alim Al Islam, for his constant supervision of this work. He helped me in every aspect of this work and guided me with proper directions whenever I sought one. His patient hearing of my ideas, critical analysis of my observations, and detection (and amendment) of flaws in my thinking and writing have made this thesis a success.

I would also like to thank the respected members of my thesis committee, Dr. Md. Mostofa Akbar, Dr. Mahmuda Naznin, Dr. Muhammad Abdullah Adnan, and the external member Dr. Md. Mahbubur Rahman, for their encouragement, insightful comments, and valuable suggestions.

I am also thankful to Md. Saidul Hoque Anik (Lecturer, CSE, MIST), from whom I sought help on a number of occasions regarding the simulation setup and performance evaluation of this thesis.
Besides, I am grateful to Dr. Rifat Shahriyar (Associate Professor, CSE, BUET), Abhik Bhattacharjee, and Tahmid Hasan (Lecturer, CSE, BUET) for their kind support during my experimentation with a large dataset. In addition, I am grateful to Dr. Swakkhar Swatabta (Associate Professor, CSE, UIU) and Novia Nurain (Ph.D. student in CSE, BUET and Assistant Professor, CSE, UIU) for their help and valuable suggestions regarding the writing and presentation of this thesis.
Last but not least, I remain ever grateful to my beloved parents, who always exist as sources of inspiration behind every success of mine.

Abstract
Although a number of research studies have been carried out on natural language processing (NLP) in different areas, such as Example-based Machine Translation (EBMT) and Phrase-based Machine Translation, for different pairs of languages such as English to Bengali, very few research studies have been done on Bengali to English translation. Popular and widely available translators such as the Google translator perform reasonably well when translating among popular languages such as English, French, or Spanish; however, they make elementary mistakes when translating languages that are newly introduced to the system, such as Bengali and Arabic.

The Google translator uses the Neural Machine Translation (NMT) approach with Recurrent Neural Networks (RNNs) to build its multilingual translation system. Prior to NMT, the Google translator used the Statistical Machine Translation (SMT) approach. However, these approaches depend heavily on the availability of a large parallel corpus for the translating language pair. As a result, most of the research studies on NLP so far have been performed keeping English as the base or source language, while a good number of widely spoken languages remain nearly unexplored. Bengali, the eighth most widely used language in the world, is a prominent example. Therefore, in this study, we explore improved translation from Bengali to English. To do so, we study both the rule-based translator and the data-driven machine translators (NMT and SMT) in isolation, and in combination through different approaches of blending between them. More specifically, first, we implement some basic grammatical rules along with the identification of names as subjects and the optimization of Bengali verbs in our rule-based translator. Next, we integrate our rule-based translator with each of the data-driven machine translators (NMT and SMT) separately, using different approaches. Besides, we perform rigorous experimentation over different datasets to reveal a comparison among the different approaches in terms of translation accuracy, time complexity, and space complexity.

Contents
Board of Examiners
Candidate's Declaration
Acknowledgment
Abstract
1 Introduction
1.1 Motivation behind Our Work
1.2 Approach of Our Study
1.3 Our Contributions
2 Background and Related Work
2.1 Existing Research Studies
2.2 Google's Neural Machine Translation (GNMT) Model
2.2.1 Background of GNMT
2.2.2 Architecture
2.2.2.1 Embedding Layer
2.2.2.2 Encoder
2.2.2.3 Decoder
2.2.2.4 Projection Layer
2.2.2.5 Inference: Generating Translations
2.3 Limitations of Existing Research Studies
3 Proposed Methodology
3.1 Rule-based Translator
3.1.1 Step-1: Input of Bengali Text
3.1.2 Step-2: Analysis of Sentence Structure and Tokenization
3.1.3 Step-3: Token Tagging
3.1.4 Step-4: Word-by-word Translation
3.1.5 Step-5: Apply Necessary Words and Suffixes
3.1.6 Step-6: Rearrange Words by Applying Grammatical Rules
3.2 Verb Identification and Memory Optimization
3.2.1 Approach 1: Plain Vocabulary including All Forms of Verbs
3.2.2 Approach 2: Optimized Database with Semantic Analysis
3.2.3 Approach 3: Modified Levenshtein Distance
3.3 Name Identification
3.3.1 Subjects with Emphasizing Tags
3.4 Blending Rule-based Translator with NMT
3.4.1 NMT Followed by Rule-based Translation
3.4.2 Rule-based Translation Followed by NMT
3.4.3 Either NMT or Rule-based Translator
4 Performance Evaluation
4.1 Experimental Settings
4.1.1 Settings for Experimentation of Rule-based Translator
4.1.2 Setting of Experimentation with NMT
4.2 Datasets
4.2.1 Demography of Datasets
4.2.2 Individual Sentences
4.2.3 Literature-based Dataset
4.2.4 Custom Dataset
4.2.5 Full Dataset
4.2.6 GlobalVoices Dataset
4.2.7 Representativeness in Our Datasets
4.3 Evaluation Metrics
4.3.1 BLEU
4.3.2 METEOR
4.3.3 TER
4.4 Experimental Results and Findings
4.4.1 Results from Our Proposed Rule-based Translator
4.4.2 Results on Name Identification
4.4.3 Results on Optimized Verb Translation Technique
4.4.4 Overall Improvement with Name Identification and Optimized Verb Translation Technique
4.4.5 Comparison with Google Translator
<s>. . . . . . . . . . . . . . . . . . . . . . 664.4.6 Results from Our Different Blending Approaches . . . . . . . . . . . . . . . . . 674.4.6.1 Results using Literature-based Dataset . . . . . . . . . . . . . . . . . 674.4.7 Results using Full Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764.5 Resource Overhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794.5.1 Time Overhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794.5.2 Memory Overhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814.6 Overall Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834.7 Overall Experimental Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834.8 Extension of Our Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 844.9 Extending Our Study to A High-Resource Context . . . . . . . . . . . . . . . . . . . . 865 Analogy to Human Behaviour: A Casual Cross Checking to Our Proposed Meth-ods and Their Results 885.1 Demography of Survey Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895.2 Survey Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906 Avenues for Further Improvements 926.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937 Conclusion 95Bibliography</s>
List of Figures

1.1 Applications of translation
2.1 Faulty translations of Google Translator
2.2 Architecture of Unsupervised Neural Machine Translation [11]
2.3 Architecture of chunk-based EBMT [6]
2.4 Encoder-decoder architecture for English to German translator [50]
2.5 Neural machine translation – example of a deep recurrent architecture [50]
2.6 Neural machine translation – inference [50]
3.1 Mechanism of our proposed rule-based translator
3.2 Input paragraph
3.3 Input sentences obtained from the paragraph shown in Figure 3.2
3.4 Keywords of complex sentence under our considerations
3.5 Splitting a complex sentence into a principal clause and a subordinate clause
3.6 Bengali parse tree
3.7 Determining tense from a Bengali verb
3.8 Target (English) parse tree
3.9 Processing a complex sentence into two clauses representing two simple sentences
3.10 Translation of a complex sentence
3.11 Translation of a compound sentence
3.12 Several forms of two different verbs in Bengali
3.13 Database optimization using semantic analysis on different forms of a verb
3.14 Mapping between concatenated string and corresponding standard form of two different verbs
3.15 Name identification in our proposed model
3.16 Character-to-character(s) mappings
3.17 Translating names from Bengali to English using our proposed phonetic mapping conversion system
3.18 Faulty translation due to not identifying emphasizing tags
3.19 Separating emphasizing tags from subjects
3.20 Separating emphasizing tags from subjects and corresponding translations
3.21 Different blending techniques between rule-based translation and NMT as explored in this study
3.22 An example of NMT followed by rule-based translation
3.23 An example of rule-based translation followed by NMT
3.24 An example of choosing either NMT or rule-based translation
4.1 Demography of our datasets
4.2 Individual sentences for evaluating translations by our rule-based translator
4.3 Partial Bengali literature-based dataset (extracted from Al-Quran)
4.4 Partial English literature-based dataset (extracted from Al-Quran)
4.5 Partial Bengali vocabulary
4.6 Partial English vocabulary
4.7 Beginning of vocabulary files
4.8 Partial Custom dataset
4.9 Percentages of sizes of sentences in the full dataset
4.10 Percentages of sizes of sentences in the GlobalVoices dataset
4.11 Representativeness in our literature-based dataset according to Zipf's law
4.12 Representativeness in our full dataset according to Zipf's law
4.13 Unigram mappings between a candidate sentence and a reference sentence
4.14 Sample translation of simple sentences (simple past tense)
4.15 Sample translation of simple sentences (simple future tense)
4.16 Sample translation of a complex sentence
4.17 Sample outputs of name identifications and translating names
4.18 Processing of subjects with emphasizing tags
4.19 Outcomes of our first modification on Levenshtein distance algorithm
4.20 Further improvement over modified Levenshtein distance through removing common suffixes
4.21 Snapshots of translations generated by Google Translator for our example sentences in Table 4.5 (collected on or before August 30, 2019)
4.22 NMT versus only rule-based METEOR score
4.23 NMT versus only rule-based TER score
4.24 NMT versus NMT followed by rule-based METEOR score
4.25 NMT versus NMT followed by rule-based TER score
4.26 NMT versus rule-based followed by NMT METEOR score
4.27 NMT versus rule-based followed by NMT TER score
4.28 NMT versus NMT or rule-based METEOR score
4.29 NMT versus NMT or rule-based TER score
4.30 Variation of BLEU scores with an increase in the number of implemented rules
4.31 Variation of METEOR scores with an increase in the number of implemented rules
4.32 Variation of TER scores with an increase in the number of implemented rules
4.33 Comparison of normalized performance scores with an increase in the number of implemented rules for literature-based dataset
4.34 Variation of BLEU scores with an increase in the number of implemented rules for full dataset
4.35 Variation of METEOR scores with an increase in the number of implemented rules for full dataset
4.36 Variation of TER scores with an increase in the number of implemented rules for full dataset
4.37 Comparison of normalized performance scores with an increase in the number of implemented rules for full dataset
4.38 Comparison in variation of time with an increase in the number of implemented rules for literature-based dataset
4.39 Comparison in variation of time with an increase in the number of implemented rules for full dataset
4.40 Comparison in variation of memory consumption with an increase in the number of implemented rules for literature-based dataset
4.41 Comparison in variation of memory consumption with an increase in the number of implemented rules for full dataset
4.42 Comparison between NMT and 'NMT followed by rule-based' approach in terms of BLEU scores with different datasets
4.43 Comparison between NMT and 'NMT followed by rule-based' approach in terms of BLEU scores with respect to an increase in the number of training steps
5.1 Demography of survey participants
5.2 Results of survey participants' responses

List of Tables

3.1 Initial tagging table for tokens
3.2 Token translation using vocabulary
3.3 Final tagging with translation containing necessary information about each token
3.4 Commonly-used Bengali suffixes representing tenses
3.5 Modifying verbs based on tenses, persons, and numbers
3.6 Database table for translating a verb having different forms
4.1 Summary of the different datasets
4.2 Experimental results for some example simple sentences
4.3 Experimental results for some example complex sentences
4.4 Experimental results for some example compound sentences
4.5 Improvement with name identification and optimized verb translation technique in terms of BLEU score
4.6 Comparison between performances of our rule-based translator and Google Translator for some example sentences
4.7 Comparison among different translation approaches
4.8 Comparison as per BLEU scores
4.9 Comparison among different translation approaches for full (combined) dataset
4.10 Time overheads of rule-based, NMT, and 'NMT followed by rule-based' for literature-based dataset
4.11 Comparison among different translation approaches for literature-based dataset
4.12 Comparison among different translation approaches for full dataset
4.13 Overall percentage (%) improvement over different parameters with respect to NMT for literature-based dataset
4.14 Overall percentage (%) improvement over different parameters with respect to NMT for full dataset
4.15 Overall percentage (%) improvement over different parameters with respect to rule-based approach for literature-based dataset
4.16 Overall percentage (%) improvement over different parameters with respect to rule-based approach for full dataset
4.17 Comparison among different translation approaches considering SMT as baseline system
4.18 Comparison among different translation approaches for a high-resource context
5.1 Mapping between human translation approaches and our proposed translation approaches
Chapter 1
Introduction

Human beings have been communicating using various spoken languages since their earliest days on the earth. Languages can express thoughts on an unlimited number of topics, e.g., social interaction, religion, the past, the future, etc. While many believe the number of languages in the world to be about 6,500, there are actually around 7,106 living languages in the world [46]. Although this number might be the latest count, there is no definitive answer on the exact number of languages that still exist. This huge number of languages is spread all over the world. For example, around 230 languages are spoken in Europe, whereas over 2,000 languages are spoken in Asia [47].

Every human language has a vocabulary consisting of thousands of words, which are primarily built up from several dozen speech sounds. More remarkably, every normal child learns the whole system (the mother tongue) just from hearing others use it. Apart from the mother tongue, however, other languages are generally learnt through a more systematic process. Besides, in all languages, there are many words that may have multiple meanings, and some sentences may use different grammatical structures to express the same meaning [3]. This challenge, in turn, makes it immensely difficult to perform semantic-analysis-based translation between a pair of languages. Moreover, the task of translation becomes most difficult when the pair contains a source language that is less explored in terms of having a substantially large parallel corpus [1]. Bengali is an example of such a source language. Therefore, it remains a great challenge to perform the right semantic analysis to properly recognize any sentence of such a language. In this context, in this thesis, we study Bengali to English machine translation by semantic-based parts-of-speech tagging, verb identification and stemming, and name identification by lemmatization.
We perform our study through exploring rule-based translation and neural machine translation, both in isolation and in combination, by applying different blending approaches.

1.1 Motivation behind Our Work

Natural languages such as English, Spanish, and even Hindi are rapidly progressing in machine translation. While progress has been made in language translation software and allied technologies, the primary language of the ubiquitous and all-influential World Wide Web remains English [48]. English is mostly the language of the latest applications, programs, new freeware, manuals, shareware, peer-to-peer conversation, social media networks, and websites [48].

Figure 1.1: Applications of translation

Millions of immigrants who travel the world from non-English-speaking countries every year face the necessity of learning English to communicate, since it is very important for entering and ultimately succeeding in mainstream English-speaking countries. This success gets realized when the learning covers all forms of reading, writing, speaking, and listening, which together constitute the process of translation encompassing a diversified set of applications (Figure 1.1).

Working knowledge of the English language can create many opportunities in international markets and regions. However, similar to many other non-English-speaking countries, a major group of Bengali-speaking people from Bangladesh and India lacks proficiency in English [4]. This crisis keeps growing over time, as no well-developed translator exists to date for Bengali to English translation. Therefore, the importance of an efficient Bengali to English translator is noteworthy.

1.2 Approach of Our Study

We present an overview of the approach of our study in this section. Our initial focus is to explore building a rule-based translator. To do so, we implement some of the Bengali grammatical rules in our system along with detection of person, number, tense, etc., using semantic analysis. Besides, we accomplish optimization of Bengali verbs and identification of names as subjects to improve the performance of our rule-based translator. After that, we explore and implement the classical NMT (Neural Machine Translation). Then, we integrate the rule-based translator and NMT using different approaches to investigate the best-possible translation performance. We measure the performance using three standard metrics of machine translation. Besides, we extend our experimentation (similar to NMT) by exploring another popular data-driven machine translation technology, Statistical Machine Translation (SMT), as it was used by the popular Google Translator just before NMT [9]. Finally, we present the results of our experimentation both statistically and graphically, and also analyze them in detail.

Next, we present an outline of our thesis. In Chapter 2, we highlight the related work in the field of natural language processing, especially on Bengali-English translation. Then, we briefly discuss the classical neural machine translation approach in Chapter 2, which is a very important part of our study. Next, in Chapter 3, we present our implemented rule-based translator for Bengali to English translation. In addition, we propose our verb identification and memory optimization techniques in Chapter 3. Here, we implement a modified Levenshtein distance algorithm for root verb detection (Section 3.2.3). Finally, we investigate the performance of our proposed translator using three different blending approaches, which we also discuss in Chapter 3.
In Chapter 4, we discuss the performance evaluation of our proposed mechanisms. Here, first, we show our experimental settings and different datasets. Next, we discuss our performance evaluation metrics (BLEU [19], METEOR [20], and TER [21]). Then, we present various experimental results (simulation outputs, graphs, tables, etc.) and our findings in Chapter 4. Furthermore, in Chapter 5, we perform a casual cross-checking of our results with respect to human behaviour by conducting an online survey to identify which method(s) people generally follow (perhaps subconsciously) to translate from Bengali to English. Afterwards, we unfold our possible future studies in Chapter 6. In the end, we conclude by summarizing the problem and our contributions in this study in Chapter 7.

1.3 Our Contributions

Based on our work, our main contributions in this study are as follows:

• First, we propose a rule-based Bengali to English translator that implements some basic grammatical rules for Bengali to English translation. Our rule-based translator mainly focuses on Bengali grammar with some exceptional approaches such as finding the standard form of verbs, identifying unknown words (names) as subjects, etc. Apart from processing simple sentences, our rule-based translator also considers basic complex and compound sentences. Besides, our rule-based translator properly identifies subjects with emphasizing tags to improve its overall translation performance.

• Afterwards, we integrate the rule-based translator with existing NMT using different possible approaches. To do so, first, we implement the classical NMT. Designing a parallel corpus containing Bengali-English sentence pairs for training NMT is one of the toughest challenges that we face, since Bengali is an extremely low-resource language. Next, we blend our rule-based translator and NMT in three different ways to investigate the best-possible blending approach. Afterwards, similar to NMT, we implement SMT, and blend our rule-based translator with it to verify our best-possible blending approach.

• Finally, we perform the performance evaluation for the proposed rule-based translator, the classical NMT, and their integrated solutions using three standard metrics: BLEU, METEOR, and TER. We present the results for the rule-based translator and NMT both in isolation and in combination. We also perform a comparative analysis of the results among all the proposed approaches both statistically and graphically. Besides, we show performance scores for SMT and its integrated solutions with the rule-based translator as an extension of our experimental results.

Chapter 2
Background and Related Work

Bengali, despite being among the top ten languages worldwide, lags behind in some crucial areas of research in machine translation such as parts-of-speech (POS) tagging, text categorization and contextualization [35], and syntax and semantic checking. The most noteworthy previous studies in this regard include Example-based Machine Translation (EBMT) [4], phrase-based machine translation [5], syntactic transfer, and the use of syntactic chunks as translation units [6]. However, these studies lack in processing Bengali words semantically. Besides, although significant research work can be found on English to Bengali translation [2][7], very little work has been performed on translating in the other direction, i.e., from Bengali to English [3][8]. Popular translators such as Google, Bing, Yahoo Babel Fish, etc., often perform very poorly when they translate from Bengali to other languages.
Google Translator, the most popular one among them, uses the neural machine translation (NMT) approach with RNNs at present [9][10].

NMT has emerged as the most promising machine translation approach in recent years, showing superior performance on public benchmarks [1][11]. It is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional translation systems. In spite of the recent success of NMT on standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs such as Bengali-English [12]. This is why NMT performs reasonably well when it translates among the most popular languages; however, it often makes elementary mistakes while translating languages that are less known to the system, such as Bengali, as shown in Figure 2.1 [14][15]. Focusing on rule-based translation in such a case might be a solution, which is yet to be explored in the literature. Moreover, blending NMT with such a rule-based translator is yet another aspect to be investigated.

Figure 2.1: Faulty translations of Google Translator ((a) simple sentences; (b) complex sentence and compound sentence)

2.1 Existing Research Studies

Wu et al. [9] presented GNMT, Google's Neural Machine Translation system, with the objectives of reducing computational cost both in training and in translation inference, and increasing parallelism and robustness in translation. However, this approach solely relies on the availability of a significantly large parallel corpus, and it makes elementary mistakes while translating low-resource languages [36].

Artetxe et al. [11] removed the need for parallel data and proposed a novel method to train an NMT system with the objectives of relying on monolingual corpora only and profiting from small parallel corpora. Figure 2.2 reflects the architecture of this approach.

Figure 2.2: Architecture of Unsupervised Neural Machine Translation [11]

Here, for each sentence in language L1, the system is trained by alternating two steps: 1) denoising, which optimizes the probability of encoding a noised version of the sentence with the shared encoder and reconstructing it with the L1 decoder, and 2) on-the-fly back-translation, which translates the sentence in inference mode (encoding it with the shared encoder and decoding it with the L2 decoder), and then optimizes the probability of encoding this translated sentence with the shared encoder and recovering the original sentence with the L1 decoder. Training alternates between sentences in L1 and L2, with analogous steps for the latter. However, this promising approach still falls much behind the performance level of classical NMT.
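The denoising step corrupts the input so that the shared encoder cannot simply learn to copy the sentence. The survey above does not fix the choice of noise model, so the one below is an assumption: random swaps of adjacent words, a common choice in unsupervised NMT. The following minimal Java sketch illustrates only this corruption step; the encoder and decoder themselves are not modeled here.

```java
import java.util.*;

public class DenoisingNoiseDemo {
    // Produce a noised copy of a sentence by randomly swapping adjacent words.
    // The unsupervised system is then trained to reconstruct the original
    // sentence from this corrupted input (the "denoising" objective).
    static List<String> addNoise(List<String> words, double swapProb, Random rnd) {
        List<String> noised = new ArrayList<>(words);
        for (int i = 0; i + 1 < noised.size(); i++)
            if (rnd.nextDouble() < swapProb)
                Collections.swap(noised, i, i + 1);
        return noised;
    }

    public static void main(String[] args) {
        List<String> sentence = List.of("I", "am", "a", "student");
        // A corrupted word order that the encoder-decoder must learn to undo.
        System.out.println(addNoise(sentence, 0.5, new Random(42)));
    }
}
```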
Gangadharaiah et al. [5] converted CNF to normal parse trees using a bilingual dictionary with the objective of generating templates for aligning and extracting phrase-pairs for clustering. However, this work considers neither stemming of different forms of verbs nor translation of unknown words. Besides, this approach also relies on the availability of a significantly large parallel corpus.

Saha et al. [28] reported an Example-based Machine Translation (EBMT) system with the objective of translating news headlines from English to Bengali. However, this work was a specialized methodology only for newspaper headlines and did not consider the development of a Bengali lexicon with the necessary tags. Kim et al. [6] used syntactic chunks as translation units with the objective of properly dealing with systematic translation for insertion or deletion of words between two distant languages. Figure 2.3 reflects the architecture of their proposed chunk-based EBMT.

Figure 2.3: Architecture of chunk-based EBMT [6]

According to the architecture, first, given an input sentence, the system finds chunk (a sequence of words) sequence matches, and a chunk aligner finds their translations. Next, when no chunk match or no chunk alignment is found, it finds word/phrase matches, and uses a phrasal aligner to find their translations. Afterwards, it puts the chunk translations and word/phrase translations into a lattice. Besides, for each translation, it keeps track of whether the translation comes from the chunk alignment or not. Finally, it performs standard beam decoding to find the best translation. However, this approach fails to address some basic grammatical rules during translation, as it does not apply any specific rule while combining the chunk translations generated by the chunk aligner. Besides, this approach does not consider translating unknown words (those not found in its vocabulary).

Additionally, there exist other research studies on NLP. For example, Souvik et al. [2] proposed a solution based on parse trees, Naskar et al. [31] handled prepositions in English, and Dasgupta et al. [7] proposed another approach based on parse trees. However, these techniques considered the English-to-Bengali context only, not focusing on Bengali-to-English. Rahman et al. [3] explored a statistical approach for Bengali-to-English translation. Besides, both Rahman et al. [8] and Alam et al. [23] explored a basic rule-based approach for the same. However, these techniques either depended on a large corpus or omitted some basic grammatical features such as stemming and lemmatization. Apart from this limitation, these techniques are yet to consider an integration between rule-based translation and classical NMT.

Our work adopts an implementation of GNMT as the classical NMT. Therefore, we discuss GNMT in the next section.

2.2 Google's Neural Machine Translation (GNMT) Model

Neural Machine Translation (NMT) is basically an end-to-end learning approach for automated translation. The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input texts to associated output texts.

2.2.1 Background of GNMT

Google Translator, one of the most popular and widely available translators, earlier used Statistical Machine Translation (SMT) to build its multilingual translation system. SMT systems are not tailored to any specific pair of languages. In spite of being so promising and generalized, this approach usually does not work well for language pairs having significantly different word orders. Besides, SMT results may have a superficial fluency that masks translation problems, as SMT considers only a few words (a chunk) from a source sentence at a time during translation [60]. Therefore, Google recently moved towards the NMT approach. Google's NMT model was first proposed by Wu et al. in 2016, and it became a breakthrough in the field of NLP with the potential of addressing many shortcomings of traditional translation systems.

2.2.2 Architecture

In the old days, traditional phrase-based translation systems performed their task by breaking up source sentences into multiple chunks and then translating them phrase by phrase. This led to less fluency and accuracy in the translation outputs, and it was not quite like how we, humans, perform the task of translation. We generally read the entire source sentence, understand its meaning, and then produce a translation.
Neural Machine Translation (NMT) attempts to closely mimic that. Specifically, an NMT system first reads the source sentence using an encoder to build a "thought" vector (a sequence of numbers that represents the meaning of the sentence). Then, a decoder processes the sentence vector to produce a translation, as illustrated in Figure 2.4. This is often referred to as the encoder-decoder architecture. In this manner, NMT addresses the local translation problem of the traditional phrase-based approach. Thus, NMT can capture long-range dependencies in languages, e.g., gender agreements, syntax structures, etc., and produce much more fluent translations, as demonstrated by GNMT systems.

Figure 2.4: Encoder-decoder architecture for English to German translator [50]

NMT models vary in terms of their exact architectures. A natural choice for sequential data is the Recurrent Neural Network (RNN), used by most NMT models. Usually, an RNN is used for both the encoder and the decoder. The RNN models, however, differ in terms of: (a) directionality – unidirectional or bidirectional [50]; (b) depth – single- or multi-layer [50]; and (c) type – often either a vanilla RNN, a Long Short-Term Memory (LSTM), or a Gated Recurrent Unit (GRU) [50]. In our experimentation, a deep multi-layer RNN has been considered, which is unidirectional and uses LSTM as the recurrent unit. An example of such a model is shown in Figure 2.5.

Figure 2.5: Neural machine translation – example of a deep recurrent architecture [50]

In this example, a model is built to translate a source sentence "I am a student" into a target sentence "Je suis étudiant". At a high level, the NMT model consists of two recurrent neural networks: the encoder RNN simply consumes the input source words without making any prediction; the decoder, on the other hand, processes the target sentence while predicting the next words. We present the different components of the NMT architecture below.

2.2.2.1 Embedding Layer

Initially, we need to train the NMT system using the bilingual parallel corpus. The model must first look up the source and target embeddings to retrieve the corresponding word representations. For this embedding layer to work, NMT first chooses a vocabulary for each language. Usually, NMT selects a vocabulary of size V and treats only the V most frequent words as unique. It converts all other words to an "unknown" (<unk>) token, and they all get the same embedding. NMT learns the embedding weights, one set per language, during training with the parallel corpus.

2.2.2.2 Encoder

NMT can use one or more LSTM layers to implement the encoder model. The output of this model is a fixed-size vector that represents the internal representation of the input sequence. The number of memory cells in this layer defines the length of this fixed-size vector.

Once retrieved, NMT feeds the word embeddings as input into the main network, which consists of two multi-layer RNNs – an encoder for the source language and a decoder for the target language. These two RNNs, in principle, can share the same weights. However, in practice, the model often uses two different sets of RNN parameters, which do a better job of fitting large training datasets. Here, the encoder RNN uses zero vectors as its starting states.
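The vocabulary construction just described is simple enough to sketch directly. The following minimal Java illustration (not the thesis's actual implementation; the corpus and vocabulary size are hypothetical) builds a frequency-ranked vocabulary of size V and maps every out-of-vocabulary word to the shared <unk> id:

```java
import java.util.*;
import java.util.stream.*;

public class VocabularyDemo {
    // Build a vocabulary of the V most frequent words; all others map to <unk>.
    static Map<String, Integer> buildVocabulary(List<String> corpus, int v) {
        Map<String, Long> freq = corpus.stream()
                .flatMap(s -> Arrays.stream(s.split("\\s+")))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        Map<String, Integer> vocab = new HashMap<>();
        vocab.put("<unk>", 0);                      // shared id for all rare words
        freq.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(v)
                .forEach(e -> vocab.put(e.getKey(), vocab.size()));
        return vocab;
    }

    // Look up a word id; unseen words share the <unk> embedding row.
    static int wordId(Map<String, Integer> vocab, String word) {
        return vocab.getOrDefault(word, vocab.get("<unk>"));
    }

    public static void main(String[] args) {
        List<String> corpus = List.of("I am a student", "I am a teacher");
        Map<String, Integer> vocab = buildVocabulary(corpus, 5);
        System.out.println(wordId(vocab, "student"));  // an in-vocabulary id
        System.out.println(wordId(vocab, "Dhaka"));    // falls back to <unk> = 0
    }
}
```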
2.2.2.3 Decoder

The decoder must transform the learned internal representation of the input sequence into the correct output sequence. NMT can also use one or more LSTM layers to implement the decoder model. This model reads the fixed-size output generated by the encoder model. The decoder also needs to have access to the source information. Therefore, NMT simply initializes the decoder with the last hidden state of the encoder. Thus, as shown in Figure 2.5, NMT passes the hidden state of the last source word "student" to the decoder side.

2.2.2.4 Projection Layer

The projection layer is a dense matrix that turns the top hidden states into logit¹ vectors whose dimension equals the vocabulary size. The projection layer maps the discrete word indices of an n-gram context to a continuous vector space, as shown in Figure 2.6. NMT models share it such that, for contexts containing the same word multiple times, the same set of weights applies to form each part of the projection vector.

¹ Logits generally refers to the unnormalized final scores of a machine learning model. We apply softmax to them to get a probability distribution over the classes. In our model, logits refers to the scores of vocabulary words for appearing in the translation.

2.2.2.5 Inference: Generating Translations

Once NMT has finished training, it can generate translations from previously unseen source sentences. This process is called inference. There is a clear distinction between training and inference (testing) since, at inference time, we only have access to the given source sentence. NMT then performs the decoding.

The idea is simple, as illustrated in Figure 2.6. First, NMT encodes the source sentence in the same way as during training. Next, it starts decoding as soon as it receives the starting symbol (<s>). Then, for each timestep on the decoder side, NMT treats the RNN's output as a set of logits. NMT then chooses the most likely word, i.e., the id associated with the maximum logit value, as the emitted word. For example, in Figure 2.6, the word "moi" has the highest translation probability in the first decoding step. Afterwards, NMT feeds this word as an input to the next timestep. This step is what makes inference different from training. Finally, the process continues until the decoder produces the end-of-sentence marker (</s>) as an output symbol.

Figure 2.6: Neural machine translation – inference [50]

The GNMT system approaches the accuracy achieved by average bilingual human translators on some of the designed test sets. In particular, compared to the previous phrase-based production system, the GNMT system delivers roughly a 60% reduction in translation errors on several popular language pairs [50]. However, even such a promising approach exhibits some major weaknesses. Three inherent weaknesses of NMT are: 1) its slower training and inference speed, 2) its ineffectiveness in dealing with rare words, and 3) its occasional failure to translate all words in the source sentence. Its performance generally improves with an increased size of the dataset (parallel corpus). We investigate the integration of NMT with the rule-based approach to improve the overall performance in the next chapters.
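The greedy decoding loop of Section 2.2.2.5 is compact enough to sketch before moving on. In the following minimal Java sketch, the trained decoder RNN plus projection layer is abstracted as a step function that returns one logit per vocabulary word; the toy vocabulary, the scripted logits, and the start/end ids are illustrative only:

```java
import java.util.*;
import java.util.function.BiFunction;

public class GreedyDecodeDemo {
    // Greedy inference: repeatedly pick the argmax word and feed it back,
    // until the end-of-sentence marker </s> is emitted.
    static List<String> greedyDecode(String[] vocab,
                                     BiFunction<Integer, Integer, double[]> step,
                                     int startId, int endId, int maxLen) {
        List<String> output = new ArrayList<>();
        int prev = startId;
        for (int t = 0; t < maxLen; t++) {
            double[] logits = step.apply(prev, t);   // one score per vocabulary word
            int best = 0;
            for (int i = 1; i < logits.length; i++)  // argmax over the logits
                if (logits[i] > logits[best]) best = i;
            if (best == endId) break;                // </s> terminates decoding
            output.add(vocab[best]);
            prev = best;                             // emitted word feeds the next timestep
        }
        return output;
    }

    public static void main(String[] args) {
        String[] vocab = {"<s>", "</s>", "moi", "suis", "étudiant"};
        // A stand-in for the decoder RNN + projection layer: scripted logits.
        double[][] scripted = {
            {0, 0, 9, 1, 1},   // t=0: "moi" has the highest score
            {0, 0, 1, 9, 1},   // t=1: "suis"
            {0, 0, 1, 1, 9},   // t=2: "étudiant"
            {0, 9, 1, 1, 1},   // t=3: </s> ends the sentence
        };
        System.out.println(greedyDecode(vocab, (prev, t) -> scripted[t], 0, 1, 10));
        // prints [moi, suis, étudiant]
    }
}
```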
2.3 Limitations of Existing Research Studies

Although there exists a significant amount of research in the field of language processing, Bengali remains little explored in the literature. Therefore, in this study, we specifically address some major limitations in Bengali to English translation, along with Bengali language processing to some extent. These major limitations are discussed as follows.

• None of the existing studies focuses on the integration of a rule-based translator with any data-driven machine translator (NMT or SMT) for translation between any language pair. Investigation of the possible outcomes of such integration or blending between these two translation approaches is completely absent in the literature. However, our survey (in Chapter 5) suggests that most people generally prefer using both NMT (or SMT)-like and grammatical-rule-based translations while translating from one language to another in their daily life. This human behaviour points towards the prospect of exploring integration between a rule-based translator and NMT in machine translation.

• Apart from this, a large parallel corpus for Bengali-to-English machine translation is yet to become available. However, the performance of NMT relies solely on the availability of a significant amount of training data: the more example translations NMT sees during training, the better it translates (infers). Not a single corpus containing a substantial number of Bengali-English sentence pairs is available for NMT, which significantly limits the performance of NMT in translating Bengali to English.

• Besides, existing techniques do not consider finding the stems of different forms of Bengali verbs. As Bengali verbs can take multiple forms based on tense, we need to detect the standard form of a verb by stemming in order to optimize total memory consumption. The current literature does not consider this issue, not only for Bengali but also for other languages.

• In addition, existing studies cannot properly recognize and translate words that are not found in a specialized vocabulary, such as names of people. Besides, Bengali sentences may contain emphasizing tags attached to the subjects as suffixes, which leads to faulty detection of subjects as names. Existing studies have not explored this issue either.

Chapter 3
Proposed Methodology

Our work initially focuses on building a rule-based translator [15]. Next, our target is to explore and implement the classical NMT, i.e., GNMT. To do so, we collect and build datasets (Bengali and English language pairs) of different sizes from different sources. Subsequently, after implementing both the rule-based translator and the classical NMT in isolation, we integrate these two translators using different approaches to investigate the best-possible translation performance. We present our proposed mechanisms and algorithms next in detail.

3.1 Rule-based Translator

Our rule-based translator initially focuses on the implementation of simple sentences. Here, simple sentence analysis and recognition is the preliminary step, which leads to the advancement of our system towards the implementation of complex and compound sentences later. However, analyzing and recognizing a simple sentence of a language requires enormous knowledge of that particular language. In this study, our rule-based translator implements some basic grammatical rules for Bengali to English translation. Figure 3.1 illustrates the mechanism of our proposed rule-based translator. As shown in the figure, our proposed mechanism for implementing the rule-based translator consists of six major steps.

Figure 3.1: Mechanism of our proposed rule-based translator
We elaborate each of the steps in the following subsections.

3.1.1 Step-1: Input of Bengali Text

The first step is to take an input sentence. Here, the input sentence is a Bengali sentence. If we get a paragraph as input, we recognize sentences by splitting the input paragraph at the sentence-terminating delimiters. Our considered sentence-terminating delimiters are '।' (the Bengali full stop) and ';'. The input sentence is then fed to the tokenizer for token identification and further processing. Figure 3.2 shows an example of how a Bengali paragraph can appear as input.

Figure 3.2: Input paragraph

We split the paragraph into several independent sentences. For example, the paragraph in Figure 3.2 gets split into nine sentences, as shown in Figure 3.3. We consider these sentences as separate input sentences that we need to translate one by one. Therefore, we tokenize each sentence next.

Figure 3.3: Input sentences obtained from the paragraph shown in Figure 3.2

In our rule-based system, we cover mostly simple sentences along with basic complex and compound sentences. To differentiate simple sentences from complex and compound sentences, we primarily check whether any of the keywords of a complex or compound sentence is present in the input sentence. The rationale behind this consideration is the fact that a complex sentence is formed when we join a principal clause and a subordinate clause with a connective. It can have one or more dependent clauses (also called subordinate clauses). Since a dependent clause cannot stand on its own as a sentence, a complex sentence must also have at least one independent clause. Therefore, a complex sentence is basically a union of two simple sentences that come out of the clauses. Our system can recognize the popular basic keywords of a complex sentence shown in Figure 3.4.

Figure 3.4: Keywords of complex sentence under our considerations

If our system determines that a sentence is complex by matching any of these keywords, then it splits the sentence into two grammatical clauses: 1) a principal clause, and 2) a subordinate clause. Each clause then acts as an independent simple sentence, which passes to the tokenizer in the next phase for further processing. For example, we split a complex sentence into its clauses as shown in Figure 3.5.

Figure 3.5: Splitting a complex sentence into a principal clause and a subordinate clause

3.1.2 Step-2: Analysis of Sentence Structure and Tokenization

Next, we tokenize the sentences identified or split in the previous step. Our system, here, considers each Bengali word of a sentence as a token, and tags the tokens in various ways such as position, person, number, parts of speech (PoS), and tense. For example, let us consider an input Bengali sentence meaning "He eats rice". Our system tokenizes this input sentence into its constituent words. Besides, our system initially determines the position of a token by analyzing a very basic grammatical rule for Bengali sentence formation: "Subject + Object + Verb". We present this in Figure 3.6.

Figure 3.6: Bengali parse tree

The figure illustrates a Bengali parse tree that presents how our system recognizes the tokens with their roles in the sentence. Here, we have some predefined commonly-used nouns, pronouns, verbs, etc., in our system that are presented in the parse tree as "Noun", "Pro", "VP", etc., respectively.
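Before moving to token tagging, Steps 1 and 2 can be illustrated concretely. The following minimal Java sketch splits a paragraph at the sentence-terminating delimiters and then separates a complex sentence into two clauses at the first matched connective keyword; the keyword strings are romanized placeholders standing in for the Bengali connectives of Figure 3.4:

```java
import java.util.*;

public class SentenceSplitterDemo {
    // Sentence-terminating delimiters considered by our system: '।' and ';'.
    static List<String> splitParagraph(String paragraph) {
        List<String> sentences = new ArrayList<>();
        for (String s : paragraph.split("[।;]"))
            if (!s.isBlank()) sentences.add(s.trim());
        return sentences;
    }

    // If a complex-sentence keyword occurs, split into a principal clause and
    // a subordinate clause; otherwise the sentence is treated as simple.
    static List<String> splitClauses(String sentence, List<String> keywords) {
        for (String kw : keywords) {
            int pos = sentence.indexOf(kw);
            if (pos >= 0) {
                String first = sentence.substring(0, pos).trim();
                String second = sentence.substring(pos + kw.length()).trim();
                return List.of(first, second);   // two simple clauses
            }
        }
        return List.of(sentence);                // simple sentence: one clause
    }

    public static void main(String[] args) {
        // Illustrative placeholders; real input is Bengali text with '।' delimiters.
        List<String> keywords = List.of("jodi", "tobe");  // hypothetical connectives
        for (String s : splitParagraph("A jodi B। C;D।"))
            System.out.println(splitClauses(s, keywords));
    }
}
```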
Our system accomplishes this recognition task through its token-tagging process, which we discuss next.

3.1.3 Step-3: Token Tagging

Our system performs the task of token tagging using the information from the grammatical rule set for Bengali sentences. In our token tagging, we identify the position, PoS, number, and person of each token. Table 3.1 illustrates an example of initial token tagging for our previous input sentence (the Token column holds the Bengali words, omitted here).

Token (Bengali) | Position | POS  | Number | Person
(subject word)  | Sub      | PRO  | 1      | 3
(object word)   | Obj      | Noun | Null   | Null
(verb)          | Vrb      | VP   | Null   | Null
(delimiter)     | Delim    | Null | Null   | Null

Table 3.1: Initial tagging table for tokens

3.1.4 Step-4: Word-by-word Translation

Our rule-based translation system consists of a vocabulary containing around 1,000 commonly-used Bengali-English word pairs. Our system performs direct translation using this vocabulary and the necessary information extracted from the token tagging. Table 3.2 shows how our system performs word-by-word translation for each of the tokens.

Token (Bengali) | Translation
(object word)   | Rice
(verb)          | Eat

Table 3.2: Token translation using vocabulary

This results in our updated and final token tagging with translations, as shown in Table 3.3.

Token (Bengali) | Position | POS  | Number | Person | Translation
(subject word)  | Sub      | PRO  | 1      | 3      | He
(object word)   | Obj      | Noun | Null   | Null   | Rice
(verb)          | Vrb      | VP   | Null   | Null   | Eat
(delimiter)     | Delim    | Null | Null   | Null   | .

Table 3.3: Final tagging with translation containing necessary information about each token

3.1.5 Step-5: Apply Necessary Words and Suffixes

Next, our system determines the tense of an input sentence by analyzing the suffixes of Bengali verbs. We do this by maintaining a list of commonly-used Bengali suffixes, each mapped to a particular tense or tense code. Table 3.4 shows only a partial view of how we map the suffixes to different tenses.

Table 3.4: Commonly-used Bengali suffixes representing tenses

At this stage, we need to deal with Bengali verbs having multiple forms, since one standard Bengali verb can take multiple forms based on its tense in different sentences. Therefore, optimization of the vocabulary for verbs becomes an issue in terms of total memory consumption by the system. We address this issue in detail in Section 3.2. Here, we present an example in Figure 3.7 where a Bengali verb has been processed to determine the tense by removing its suffix.

Figure 3.7: Determining tense from a Bengali verb

After we detect the tense, our system modifies the translated English verb by adding the necessary suffixes and words (auxiliary verbs) using information extracted from the token tagging, such as the number and person of the subject. Table 3.5 shows how our system modifies the translated verbs depending on the tense, number, and person of the subject obtained from the token-tagging table. (Number 1 denotes singular and 2 denotes plural.)

Tense (Tense Code)      | Person | Number | Verb Modification (adding words and suffixes)
Simple Present (11)     | 1      | 1/2    | Null
                        | 2      | 1/2    | Null
                        | 3      | 1      | add 'es/s'
                        | 3      | 2      | Null
Present Continuous (12) | 1      | 1      | am + 'ing' form
                        | 1      | 2      | are + 'ing' form
                        | 2      | 1/2    | are + 'ing' form
                        | 3      | 1      | is + 'ing' form
                        | 3      | 2      | are + 'ing' form
Present Perfect (13)    | 1      | 1/2    | have + 'past participle' form
                        | 2      | 1/2    | have + 'past participle' form
                        | 3      | 1      | has + 'past participle' form
                        | 3      | 2      | have + 'past participle' form
Simple Past (21)        | 1/2/3  | 1/2    | 'past' form
Past Continuous (22)    | 1/2/3  | 1      | was + 'ing' form
                        | 1/2/3  | 2      | were + 'ing' form
Past Perfect (23)       | 1/2/3  | 1/2    | had + 'past participle' form
Simple Future (31)      | 1      | 1/2    | shall + verb
                        | 2/3    | 1/2    | will + verb
Future Continuous (32)  | 1      | 1/2    | shall be + 'ing' form
                        | 2/3    | 1/2    | will be + 'ing' form
Future Perfect (33)     | 1      | 1/2    | shall have + 'past participle' form
                        | 2/3    | 1/2    | will have + 'past participle' form

Table 3.5: Modifying verbs based on tenses, persons, and numbers
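The suffix-to-tense detection and the subsequent verb modification can be sketched compactly. In the following minimal Java sketch, the Bengali suffix strings are hypothetical romanized placeholders (the real Table 3.4 maps Bengali-script suffixes to tense codes), and the modification logic follows Table 3.5 only for a third-person singular subject:

```java
import java.util.*;

public class TenseDemo {
    // Partial suffix -> tense-code map in the spirit of Table 3.4
    // (romanized placeholder suffixes; the real system stores Bengali script).
    // Note: a real implementation should test longer suffixes first.
    static final Map<String, Integer> SUFFIX_TO_TENSE = Map.of(
            "chhe", 12,   // present continuous
            "echhe", 13,  // present perfect
            "lo", 21,     // simple past
            "be", 31      // simple future
    );

    static int detectTense(String bengaliVerb) {
        for (Map.Entry<String, Integer> e : SUFFIX_TO_TENSE.entrySet())
            if (bengaliVerb.endsWith(e.getKey())) return e.getValue();
        return 11;  // default: simple present
    }

    // Modify the translated verb per Table 3.5, shown here only for a
    // third-person singular subject (person = 3, number = 1).
    static String modifyVerb(String verb, int tenseCode) {
        switch (tenseCode) {
            case 11: return verb + "s";                        // eats
            case 12: return "is " + verb + "ing";              // is eating
            case 13: return "has " + pastParticiple(verb);     // has eaten
            case 21: return pastForm(verb);                    // ate
            case 31: return "will " + verb;                    // will eat
            default: return verb;
        }
    }

    // Irregular-verb lookups; only the running "eat" example is included here.
    static String pastForm(String v)       { return v.equals("eat") ? "ate"   : v + "ed"; }
    static String pastParticiple(String v) { return v.equals("eat") ? "eaten" : v + "ed"; }

    public static void main(String[] args) {
        System.out.println(modifyVerb("eat", detectTense("khabe")));  // will eat
    }
}
```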
3.1.6 Step-6: Rearrange Words by Applying Grammatical Rules

Our system has generated all the words of the translated sentence by now. However, we need to arrange these words according to the grammatical rules of the target language, i.e., English. More specifically, we need to apply the grammatical rules for building an English sentence, as we are translating into English. The basic rule for building a simple sentence in English is "Subject + Verb + Object". Therefore, our system arranges the translated words accordingly using the token-tagging table. Figure 3.8 illustrates how the target sentence gets an ordered list of translated words from the input sentence. Here, the previous input sentence has been taken as an example for generating the translation.

Figure 3.8: Target (English) parse tree

For complex and compound sentences, our system generates two translated sentences for the two different clauses (similar to simple sentences) separately, as discussed earlier. The sentence in Figure 3.9 is an example showing how our system processes a complex sentence by splitting it into two simple sentences first. Afterwards, our system adds the necessary English merging keywords for the corresponding Bengali merging keywords at the right places so that these two simple sentences merge to form the target complex or compound sentence, as shown in Figure 3.10 and Figure 3.11, respectively.

Figure 3.9: Processing a complex sentence into two clauses representing two simple sentences

Figure 3.10: Translation of a complex sentence

Figure 3.11: Translation of a compound sentence
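Step 6 can be illustrated directly from the token-tagging table. The following minimal Java sketch reorders the translated tokens of Table 3.3 from the Bengali "Subject + Object + Verb" order into the English "Subject + Verb + Object" order; the row data is the running "He eats rice" example, with the verb assumed to have already been modified in Step 5:

```java
import java.util.*;

public class RearrangeDemo {
    // One row of the final token-tagging table (Table 3.3).
    record TaggedToken(String position, String translation) { }

    // Reorder from Bengali "Subject + Object + Verb" to English
    // "Subject + Verb + Object", keeping the delimiter at the end.
    static String rearrange(List<TaggedToken> rows) {
        Map<String, String> byPosition = new HashMap<>();
        for (TaggedToken row : rows) byPosition.put(row.position(), row.translation());
        return String.join(" ",
                byPosition.get("Sub"), byPosition.get("Vrb"), byPosition.get("Obj"))
                + byPosition.get("Delim");
    }

    public static void main(String[] args) {
        List<TaggedToken> rows = List.of(
                new TaggedToken("Sub", "He"),
                new TaggedToken("Obj", "rice"),
                new TaggedToken("Vrb", "eats"),   // verb already modified in Step 5
                new TaggedToken("Delim", "."));
        System.out.println(rearrange(rows));      // He eats rice.
    }
}
```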
3.2 Verb Identification and Memory Optimization

In our proposed rule-based translation system, we store the verbs in a database along with the other words as part of the vocabulary of the intended (Bengali) language. It is worth mentioning that, in Bengali, one verb may have multiple representations based on the tense and subject of a sentence, as shown in Figure 3.12. The figure shows an example of the different forms taken by each of two different verbs in Bengali, which correspond to 'eat' and 'play' in English, respectively.

Figure 3.12: Several forms of two different verbs in Bengali

We, therefore, explore three different approaches for translating such verbs efficiently in terms of memory consumption and accuracy [13]. We discuss these approaches in the subsequent sections; each approach improves over the previous one as we progress onward.

3.2.1 Approach 1: Plain Vocabulary including All Forms of Verbs

The first is the simplest and most straightforward approach in our implementation, as presented in [14]. Here, similar to all other general words (nouns, pronouns, etc.), we simply insert all the different forms of each standard verb, with their standard translation, as separate entries in the database (vocabulary). Table 3.6 illustrates how several entries for a standard verb are incorporated in the database or vocabulary.

Table 3.6: Database table for translating a verb having different forms

Using this table, we can find the standard translated verb ('eat' in this case) for all the different forms of the verb, which we then modify according to the tense and subject of the sentence by applying semantic analysis, as discussed earlier. Let us consider the translated verb 'eat' as an example: we process the verb as 'is eating', 'ate', 'has eaten', etc., based on the semantic analysis (using the token-tagging table) of the sentence. This approach guarantees 100% accuracy in terms of verb translation. However, memory consumption becomes a major issue due to the repetitive insertion of one standard verb in its various forms, resulting in the wastage of a considerable chunk of space.

3.2.2 Approach 2: Optimized Database with Semantic Analysis

Our next proposed approach for verb identification offers an immediate improvement over the previous approach, as presented in [14]. As discussed earlier, if it is required to store the word translation for each form of the same verb, then the database becomes very large due to the repetitive insertions, leading to massive unnecessary memory consumption. However, we can avoid such multiple insertions of the same verb (having different forms) in our database through an optimization technique based on semantic analysis.

Here, we store only the standard verb in the vocabulary. Afterwards, we apply semantic analysis to detect the standard form from the other forms of the verb depending on number, person, and tense, as shown in Figure 3.13. The figure shows how one word (a standard verb) can take two different forms, and it suggests inserting only the corresponding standard word in the database, omitting the need to insert all of its different forms. This approach, thus, avoids multiple insertions in the database for the same verb with multiple forms.

Figure 3.13: Database optimization using semantic analysis on different forms of a verb

However, to detect the standard verb from its other different forms, we concatenate all the different forms of the verb into a single large string and insert it into another table with its standard form as a single entry. Figure 3.14 shows a couple of examples of such strings with their corresponding standard forms.

Figure 3.14: Mapping between concatenated string and corresponding standard form of two different verbs

This approach significantly improves over the previous approach in terms of searching time. Besides, this approach avoids the overhead of multiple entries for one standard verb. The accuracy of detecting the verb still remains at the maximum (100%) in this approach, too. However, it offers no significant improvement in terms of overall memory consumption, since it ultimately stores all the forms (as a single string) of a standard verb in the database.

3.2.3 Approach 3: Modified Levenshtein Distance

A significant improvement is achieved with our final approach for verb identification in terms of both memory consumption and computational time, as presented in [13]. In this approach, the translation of a verb is performed using a hash table. The key-value pairs in the hash table consist of only the standard forms of verbs of both the source and target languages. In order to translate effectively, it is required to recognize these standard forms of verbs from their non-standard forms. For this purpose, this approach uses a modified version of a popular string similarity measurement algorithm [17], known as Levenshtein Distance [18].

As presented in [13], a non-standard form of a Bengali verb may have a prefix and suffix assimilated into it based on tense, number, person, etc. Accordingly, instead of directly trying to match a non-standard form of a verb with its standard form, this approach first breaks the non-standard form down into its root word (stemming). Afterwards, that root word (stem) is matched with its standard form. Finally, the translation of that non-standard form of the verb is obtained from its standard form. Details of the whole approach can be found in [13].
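A minimal sketch of this third approach follows. The Levenshtein distance itself is the standard dynamic-programming algorithm; the surrounding stemming step (stripping a common suffix before matching against the standard forms) only illustrates the idea, since the precise modifications of [13] are not reproduced here, and the romanized verb forms are placeholders for Bengali script:

```java
import java.util.*;

public class VerbStemDemo {
    // Classic Levenshtein (edit) distance via dynamic programming.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        return d[a.length()][b.length()];
    }

    // Match a suffix-stripped verb form to the closest standard form
    // in the hash table of standard-verb translations.
    static String translateVerb(String form, Map<String, String> standardVerbs,
                                List<String> commonSuffixes) {
        String stem = form;
        for (String sfx : commonSuffixes) {        // crude stemming illustration
            if (stem.endsWith(sfx)) {
                stem = stem.substring(0, stem.length() - sfx.length());
                break;
            }
        }
        String best = null;
        int bestDist = Integer.MAX_VALUE;
        for (String std : standardVerbs.keySet()) {
            int dist = levenshtein(stem, std);
            if (dist < bestDist) { bestDist = dist; best = std; }
        }
        return standardVerbs.get(best);
    }

    public static void main(String[] args) {
        // Romanized placeholders; the real table keys are Bengali standard verbs.
        Map<String, String> standardVerbs = Map.of("kha", "eat", "khel", "play");
        List<String> suffixes = List.of("chchhi", "lam", "be");
        System.out.println(translateVerb("khabe", standardVerbs, suffixes)); // eat
    }
}
```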
3.3 Name Identification

One major and unique improvement achieved by our rule-based translation system is dealing with unknown words (those not found in the vocabulary), specifically through its name identification and corresponding translation technique [14]. Here, we need to identify the names of persons or objects to properly analyze the tokens obtained from an input sentence. Names of people are generally considered nouns in sentences, and they dictate the person, number, and gender of the subject when they appear as the subjects of sentences. Therefore, properly identifying names as subjects is crucial for accurate translation. Google Translator sometimes completely misunderstands Bengali names of persons. As a consequence, it fails to recognize the number and person of the subject, which ultimately leads to failure in translating even very basic Bengali sentences, as shown in Figure 2.1 earlier. Besides, it is not practical to translate names by using any database containing the vocabulary.

In our proposed model, our system first recognizes names by applying its specific grammatical rule set to identify subjects not found in the vocabulary. We show an example of this procedure of name identification while translating from Bengali to English in Figure 3.15. When our system detects a name as the subject in this way, it recognizes the token as a subject with the tags third person and singular number. Our system can then modify the verbs by adding prefixes and/or suffixes accordingly, as discussed earlier. However, we are left with no translation for the name, as names cannot appear in the vocabulary. Therefore, we develop a Bengali to English phonetic mapping conversion system, which enables translation of names (unknown words) from Bengali to English.

Figure 3.15: Name identification in our proposed model

Here, first, our system performs a direct character-to-character(s) conversion using a predefined set of character-to-character(s) mappings between the two languages, as shown in Figure 3.16.

Figure 3.16: Character-to-character(s) mappings

Next, our system modifies the previously generated translation by introducing some missing characters (if any). To do so, our system checks whether two consonants appear consecutively in the translation. If our conversion system detects any such case, it inserts a vowel, 'a' or 'o', between those two consonants, since this is how we generally render Bengali names in English. Figure 3.17 presents two example Bengali names translated using our proposed name translation technique.

Figure 3.17: Translating names from Bengali to English using our proposed phonetic mapping conversion system

This proposed system cannot work in the case of emphasizing tags, which need a specialized treatment, as presented in the next subsection.
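Before that, the phonetic conversion just described can be sketched as two passes: a character-to-character(s) substitution followed by vowel insertion between adjacent consonants. The mapping entries below are a tiny hypothetical subset of Figure 3.16, and the vowel choice is fixed to 'a' (the system may also use 'o'):

```java
import java.util.*;

public class NameTranslitDemo {
    // A tiny subset of the character-to-character(s) mappings of Figure 3.16;
    // the full system covers the whole Bengali alphabet.
    static final Map<Character, String> CHAR_MAP = Map.of(
            'র', "r", 'হ', "h", 'ম', "m", 'ি', "i", 'ক', "k", 'া', "a");

    static boolean isConsonant(char c) { return "aeiou".indexOf(c) < 0; }

    static String translateName(String bengaliName) {
        // Pass 1: direct character-to-character(s) conversion.
        StringBuilder direct = new StringBuilder();
        for (char c : bengaliName.toCharArray())
            direct.append(CHAR_MAP.getOrDefault(c, ""));

        // Pass 2: insert a vowel ('a' here) between two consecutive consonants,
        // since this is how Bengali names are commonly rendered in English.
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < direct.length(); i++) {
            result.append(direct.charAt(i));
            if (i + 1 < direct.length()
                    && isConsonant(direct.charAt(i))
                    && isConsonant(direct.charAt(i + 1)))
                result.append('a');
        }
        // Capitalize the first letter, as names are proper nouns in English.
        result.setCharAt(0, Character.toUpperCase(result.charAt(0)));
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(translateName("রহিম"));  // Rahim
    }
}
```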
3.3.1 Subjects with Emphasizing Tags

There are different emphasizing tags used in many languages. The emphasizing tags associated with names or pronouns are not actually part of the main word (name or pronoun). Rather, they emphasize the names or pronouns, presenting a notion of a supporting adjective or adverb. Thus, the tags have separate meanings and uses in the sentences. If we do not identify these tags correctly and separate them from the names or pronouns, the resulting translation may become faulty, as shown in Figure 3.18.

Figure 3.18: Faulty translation due to not identifying emphasizing tags

In this figure, the system misinterprets the subject of the first sentence as a full name (Rahimo) due to the omission of checking for emphasizing tags. Similarly, the system misinterprets the subjects of the other sentences in the figure as names (Tumio, Amii). This happens because the subject does not appear in our vocabulary due to the emphasizing tag attached to it. Here, the translation of the first sentence actually should have been "Rahim also eats rice", where the Bengali form of 'also' appears as an emphasizing tag (a suffix) not recognized in the translation presented in Figure 3.18.

Hence, first, we need to check the suffix of the subject (name or pronoun) for any such tags in Bengali sentences. If we can identify such a tag, we need to separate it from the subject. Figure 3.19 illustrates the process of separating emphasizing tags with three different examples. After this separation, we can translate the name as discussed earlier and take care of the emphasizing tags (suffixes) separately, as shown in Figure 3.20.

Figure 3.19: Separating emphasizing tags from subjects

Figure 3.20: Separating emphasizing tags from subjects and corresponding translations

Although we can apply this mechanism for name identification with emphasizing tags appropriately in our system, it may not generate the desired result in all cases. This happens because the forms of emphasizing tags can also be parts of actual names in some cases. We will return to this point with relevant examples in the next chapter. However, setting aside such faulty identifications of emphasizing tags in a limited number of cases, we can apply the proposed mechanism for name identification with emphasizing tags effectively in most cases.
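A minimal sketch of this suffix check is given below. The vocabulary, the tag suffixes, and their English glosses are hypothetical romanized stand-ins for the Bengali forms; the point is the order of the checks.

    # Minimal sketch: separate an emphasizing tag from a subject token.
    VOCAB = {"rahim", "tumi", "ami"}               # illustrative known subjects
    EMPHASIZING_TAGS = {"o": "also", "i": "even"}  # illustrative tags/glosses

    def split_subject(token):
        # Known word: nothing to separate.
        if token in VOCAB:
            return token, None
        # Unknown word ending in a tag suffix: strip the tag.
        for tag, gloss in EMPHASIZING_TAGS.items():
            if token.endswith(tag) and len(token) > len(tag):
                return token[:-len(tag)], gloss
        # Still unknown: likely a name, to be transliterated phonetically.
        return token, None

    print(split_subject("rahimo"))  # -> ('rahim', 'also')
    print(split_subject("romio"))   # -> ('romi', 'also')  the faulty case

As the second call shows, a genuine name that merely ends in a tag-like suffix gets truncated; this is exactly the trade-off discussed above and revisited in the next chapter.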
3.4 Blending Rule-based Translator with NMT

Our proposed rule-based translator exhibits good performance for smaller sentences. The scope of the rule-based translator expands as we continuously add more rules. However, it is nearly impossible to implement the unlimited and ever-changing grammatical rules of any language. Besides, it is hard to deal with rule interactions in big systems [56], grammatical ambiguities [57], and idiomatic expressions [58]. Therefore, the potential of machine translation comes to light. Machine translation (MT) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another. Recently, neural machine translation (NMT) has emerged as the most popular MT approach, being used for translation by reputed organisations such as Google and Microsoft.

We have already discussed Google's Neural Machine Translation (GNMT) system in the previous chapter. GNMT works considerably well for translating between any pair of popular languages. However, NMT has its own major limitation in terms of generating accurate translations, as shown in Figure 2.1 earlier. Thus, both the rule-based translator and NMT exhibit advantages and limitations compared to each other. This finding leads to our next investigation on blending the rule-based translator and NMT. To do so, we implement the classical NMT in our system from an open-source resource [50]. Then, we integrate our proposed rule-based translator with the classical NMT to investigate whether such an integration can achieve better translation performance. We can explore the blending in three different ways:

• NMT followed by rule-based translation,
• Rule-based translation followed by NMT, and
• Either NMT or rule-based translation depending on the type of sentence.

Figure 3.21: Different blending techniques between rule-based translation and NMT as explored in this study: (a) NMT followed by rule-based translation; (b) rule-based translation followed by NMT; (c) either NMT or rule-based translation depending on the type of sentence

Figure 3.21 illustrates how we can implement the possible blending approaches in our system. Besides, we present our blending approaches in Algorithms 1, 2, and 3. We discuss each of these three techniques in the next subsections.

3.4.1 NMT Followed by Rule-based Translation

Classical NMT initially requires training with a parallel corpus (sentence pairs of the source language and the target language). In our case, we develop and adopt parallel corpora of different sizes containing Bengali-English sentence pairs for training the NMT. After training, we feed the intended input sentences to the NMT and generate the output sentences translated into English using the classical NMT approach. In our experimentation, we consider a deep multi-layer recurrent neural network (RNN), which is unidirectional and uses LSTM as the recurrent unit [50].

Algorithm 1 Blending between rule-based translator and data-driven translator (NMT or SMT)
procedure GetBlendingOutput
    Input: source sentence (Bengali)
    Output: target sentence (English) after blending
    NMT_Output ← output generated by NMT (or SMT)
    RB_Output ← output generated by the rule-based translator
    Word ← an object with two attributes: token (word) and PoS_tag (part of speech)
    PoS_tagger(sentence) ← ArrayList of Word objects with a PoS tag for each word in the sentence
    src_len ← length of the source sentence
    NMT_words := PoS_tagger(NMT_Output)
    RB_words := PoS_tagger(RB_Output)
    Translation_NMT+RB := PerformBlending(NMT_words, RB_words)        // NMT followed by rule-based
    Translation_RB+NMT := PerformBlending(RB_words, NMT_words)        // rule-based followed by NMT
    Translation_NMTorRB := PerformBlending_NMTorRB(NMT_Output, RB_Output, src_len)  // either NMT or rule-based
    display Translation_NMT+RB
    display Translation_RB+NMT
    display Translation_NMTorRB

After getting the NMT-generated translated sentence, our blending approach applies grammatical rules on the translated sentence to further modify it and improve its translation accuracy (Figure 3.21(a)). Algorithm 1 shows the skeleton of our blending approaches. Here, as discussed earlier, our system first tokenizes the source sentence to form the token-tagging table for rule-based translation.
Using this token-tagging information, our blending system can then substitute some of the words or phrases in the NMT-generated translated sentence with the translated words obtained from our rule-based translator.
More specifically, the rule-based translator just further refines the skeleton of the translated sentence that NMT has already built, as shown in Algorithm 2.

Algorithm 2 Blending module for the 'NMT followed by rule-based' and 'rule-based followed by NMT' approaches
procedure PerformBlending
    Input: sentence1 ← ArrayList of Word objects of the sentence on which blending will be performed
    Input: sentence2 ← ArrayList of Word objects of the sentence with which sentence1 will be blended
    Output: translated sentence after performing blending
    sent1_len ← length of sentence1
    sent2_len ← length of sentence2
    blended_Translation := NULL
    for i := 0 to i < sent1_len do
        sent1_word := sentence1.get(i)
        for j := 0 to j < sent2_len do
            sent2_word := sentence2.get(j)
            if sent1_word.token ≠ sent2_word.token then
                if sent1_word.PoS_tag = sent2_word.PoS_tag then
                    sentence1.set(i, sentence2.get(j))
                    sentence2.remove(j)
                    break
        blended_Translation := blended_Translation + " " + sentence1.get(i).token
    return blended_Translation

Algorithm 2 considers the NMT-generated translation and the rule-based translation as 'sentence1' and 'sentence2' respectively for the 'NMT followed by rule-based' blending approach. Here, if our blending system finds a pair of unmatched words (tokens) having the same part of speech (PoS tag), it replaces the NMT word with the corresponding rule-based word. This is how our system checks each word in the NMT-generated translation against each word in the rule-based translation for replacement. Figure 3.22 shows an example of how this blending technique works.

Figure 3.22: An example of NMT followed by rule-based translation

Here, apart from generating the translation by NMT, we also generate its rule-based translation. However, the NMT translation forms the skeleton of the translated sentence. Next, our blending system matches the translations from both translators token by token using the token-tagging table of the rule-based translator. If the system finds a token mismatch at some position where the parts of speech agree, it replaces the NMT-generated word at that position with the rule-based word, in exactly the same position. Here, NMT translates the Bengali name "Oishee" to "Ishii", where "Ishii" takes the position of a noun. However, "Oishee" takes the same position in the rule-based translation. Therefore, first, this blending technique replaces "Ishii" with "Oishee" in the final translation. Afterwards, our system also replaces "had", "finish", and "his" with "was", "finishing", and "her" respectively, keeping the words in the other positions intact.

This technique proves to be the best blending technique, as we will illustrate in the experimental evaluation in the next chapter. The main reason is that this technique takes the skeleton of the translation from NMT and word-based attributes (such as person, number, and tense) from the rule-based translation. These two forms of realization best fit the respective strengths of the two translation approaches.
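A runnable Python rendering of Algorithm 2 is sketched below on illustrative PoS-tagged token lists; the exact sentence and tags from Figure 3.22 are not reproduced here, and the tags are deliberately simplified so that 'finish' and 'finishing' share one.

    # Minimal sketch of Algorithm 2: blend sentence2 into the skeleton of
    # sentence1, replacing unmatched tokens that share a PoS tag.
    def perform_blending(sentence1, sentence2):
        remaining = list(sentence2)  # working copy; used replacements are consumed
        blended = []
        for i, (token1, pos1) in enumerate(sentence1):
            for j, (token2, pos2) in enumerate(remaining):
                if token1 != token2 and pos1 == pos2:
                    sentence1[i] = (token2, pos2)
                    del remaining[j]
                    break
            blended.append(sentence1[i][0])
        return " ".join(blended)

    nmt = [("Ishii", "NOUN"), ("had", "VERB"), ("finish", "VERB"),
           ("his", "PRON"), ("meal", "NOUN")]
    rb = [("Oishee", "NOUN"), ("was", "VERB"), ("finishing", "VERB"),
          ("her", "PRON"), ("meal", "NOUN")]
    print(perform_blending(nmt, rb))  # -> Oishee was finishing her meal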
3.4.2 Rule-based Translation Followed by NMT

This is another blending technique, which we propose and investigate. Here, we implement the reverse sequence of the previous blending technique. First, we pass the source sentence to the rule-based translator. Next, we further modify the translated sentence with NMT, as shown in Figure 3.21(b). Similar to the earlier case, Algorithm 2 also illustrates this blending technique.
This time, our system considers the rule-based translation as 'sentence1' and the NMT-generated translation as 'sentence2' in Algorithm 2. The major limitation of this technique is that NMT runs completely on its own. NMT can generate completely wrong words at different positions in a sentence during translation, since NMT always predicts the next word in the sequence using probabilities, and its performance largely depends on the magnitude of the training data. On the other hand, the rule-based translator at least cannot pick wrong words, since it only searches the vocabulary for any particular word translation and picks the translated word if found.

Therefore, if this blending system further modifies the rule-based translated sentence with NMT, translation performance can degrade in many cases. The only fortunate case for this approach is when our rule-based translator cannot recognize the source sentence due to the lack of an appropriate rule set. This point relates to one of the most common advantages of machine translation: NMT works better for translating arbitrary sentences (not all sentences need to be covered by rules), and for fast and cheap translation. Figure 3.23 presents an example of how this technique performs translation.

Figure 3.23: An example of rule-based translation followed by NMT

Here, initially, the two unmatched words, "Oishee" in the rule-based translation and "Ishii" in the NMT translation, hold the same position in the translated sentences. Therefore, first, this blending approach replaces "Oishee" with "Ishii". Afterwards, our system also replaces "was", "finishing", and "her" with "had", "finish", and "his" respectively, as shown in Figure 3.23.

3.4.3 Either NMT or Rule-based Translator

This blending technique is much simpler than the earlier ones. It chooses one of the two translations generated separately by the rule-based translator and NMT, as shown in Figure 3.21(c). However, this blending system needs to make the choice based on some criteria, so that it chooses the better one. In our system, the rule-based translator works better for small sentences. More specifically, our rule-based translation system so far implements rules for sentences of smaller length (not more than 7 words) and simpler structure. As we keep adding more rules, the scope of the rule-based translator will definitely grow. Therefore, this blending approach chooses the rule-based translation if the source sentence is small in length; otherwise, it chooses the NMT-generated translation as the output. We present this blending approach in Algorithm 3.

Algorithm 3 Blending module for the 'either NMT or rule-based' approach
procedure PerformBlending_NMTorRB
    Input: sentence1 ← translation generated by NMT
    Input: sentence2 ← translation generated by the rule-based translator
    Input: source_length ← length of the source sentence
    Output: translated sentence after performing blending
    blended_Translation := NULL
    if source_length ≤ 7 then
        blended_Translation := sentence2    // rule-based translation
    else
        blended_Translation := sentence1    // NMT
    return blended_Translation

Figure 3.24 shows a working example of this blending technique. In the figure, we identify the source sentence as a small sentence with only five words. Our blending system considers sentences of fewer than 8 words as small sentences. Therefore, the system selects the translation generated by the rule-based translator as the final translation and ignores NMT this time.
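In Python, the whole selection rule fits in a few lines; this sketch simply mirrors Algorithm 3, with the 7-word threshold taken from the current scope of the rule set.

    # Minimal sketch of Algorithm 3: pick one translation by source length.
    SMALL_SENTENCE_LIMIT = 7  # matches the current scope of the rule set

    def blend_nmt_or_rb(nmt_translation, rb_translation, source_sentence):
        source_length = len(source_sentence.split())
        if source_length <= SMALL_SENTENCE_LIMIT:
            return rb_translation   # rule-based translation for small sentences
        return nmt_translation      # NMT for longer sentences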
Besides, note that we can update the selection criteria (sentence type) in this blending system according to the
scope of the rule-based translator. The more rules we add, the more types of sentences we can translate using the rule-based translator. Therefore, the selection criteria can be made much more flexible in this system, depending on performance analysis after incorporating more rules.

Figure 3.24: An example of choosing either NMT or rule-based translation

Chapter 4

Performance Evaluation

We perform a rigorous performance evaluation of our different approaches on the basis of different types of metrics. In this chapter, we present our experimental settings, datasets, performance metrics, and all results along with the corresponding analyses.

4.1 Experimental Settings

We need to employ considerable resources for our experimentation, as such experimentation is resource-hungry and time-consuming in general. We present the resources and settings utilized in our experiments in the next subsections.

4.1.1 Settings for Experimentation of Rule-based Translator

We use both software and hardware resources for implementing our rule-based translation system. Here, we use the Java language, the NetBeans platform/IDE, the SQLite database, OpenNLP tools [49], and the Windows 10 (64-bit) operating system as software resources. Besides, we use a Core i3-2310M (2.10 GHz) processor, 4.00 GB RAM, and a 1 TB HDD as hardware resources.

We face a major challenge regarding taking input and parsing Bengali texts in Java. To do so, initially, we set the text encoding in NetBeans to UTF-8 [15]. However, Bengali fonts and texts do not appear in NetBeans properly. Afterwards, we change the font settings (font family, font size, etc.) to finally be able to work with Bengali texts in NetBeans successfully. Besides, integration of the SQLite library [15] with the NetBeans IDE for connecting the database is another important feature of our rule-based translator. Since we need a Bengali to English dictionary in the system, we require a database (vocabulary) to retrieve the Bengali to English word translations. Additionally, for keeping other information such as the token-tagging table, number table, person table, etc., we need to connect a database to our system. For this purpose, we integrate SQLite with NetBeans by adding a jar file for SQLite.

Regarding the Bengali to English dictionary, we do not find any well-defined dictionary format that we can import into our system's database directly. Therefore, we ourselves insert a reasonable number of words into our database.

4.1.2 Settings for Experimentation with NMT

To perform our experimentation with NMT, we utilize TensorFlow in our system. Specifically, we install TensorFlow version 1.4.2 using Python's pip package manager, with Ubuntu 16.04 as the operating system. We pull the source code of NMT from GitHub to our system by running the command "git clone https://github.com/tensorflow/nmt/". Here, we use the Python language, the PyCharm platform/IDE, the TensorFlow library, and a Linux (64-bit) operating system. Besides, we use a Core i3-2310M (2.10 GHz) processor, 4.00 GB RAM, and a 1 TB HDD as hardware resources.

To start the experimentation, we design datasets for training NMT and testing its performance. We use the following hyper-parameters in our system for training NMT with our designed datasets: 1) 12,000 training steps, 2) 2 hidden layers, 3) a 20% dropout rate, and 4) 100 steps per statistics. We choose these hyper-parameters based on the benchmarks achieved for English-Vietnamese and German-English translation as
claimed in [50].

4.2 Datasets

Designing and developing datasets has been one of the most challenging and time-intensive tasks in our experimentation. For training the NMT reasonably, we require a large parallel corpus containing both the source language and the target language. In our case, NMT requires such a corpus of Bengali-English sentence pairs. However, we find very few sources available for constructing a reasonably sized dataset containing Bengali-English sentence pairs.

4.2.1 Demography of Datasets

We create the corpus on our own by translating different Bengali sentences to English one by one. We develop our dataset of Bengali-English parallel corpus from well-established contents such as Al-Quran [52], newspapers [53], movie subtitles [54], and university websites [55]. Besides, we translate different example-based individual Bengali sentences into English and accumulate them in the dataset. Figure 4.1(a) illustrates the demography of our full dataset. Initially, we experiment with only the literature-based source (Al-Quran) of our full dataset, since its size is large enough to be considered a separate dataset when compared to the size of our full dataset. Afterwards, we also experiment with our full dataset with the intent to generate results from a fairly diversified dataset. Therefore, our full dataset also includes another dataset (the custom dataset) as its subset (excluding the literature-based dataset). However, we do not use this custom dataset independently in our experimentation, since its size is too small to train an NMT system reasonably. We present the demography of our custom dataset (a subset of the full dataset) in Figure 4.1(b). Additionally, we also perform translations over individual sentences and analyze their outcomes.

Figure 4.1: Demography of our datasets: (a) full dataset; (b) custom dataset (subset of the full dataset)

There is another dataset containing more than 1 million Bengali-English parallel sentences, which is made available on a website called 'GlobalVoices' [61]. However, the sentences in this dataset contain numerous unknown characters and words (even from other languages such as Arabic, Chinese, and German), which need to be cleaned before use in experimentation. Therefore, we carefully remove such unknown characters from this dataset. Besides, there are English sentences in this dataset that are not proper translations of the corresponding Bengali sentences. Therefore, this dataset requires rigorous manual checking and correction for each sentence pair. Table 4.1 shows a summary of the different datasets.

Dataset          | Number of sentences | Sources                               | Used in experimentation?
Literature-based | 8,000               | Al-Quran                              | Yes
Custom           | 3,500               | Newspapers, subtitles, websites, etc. | Blended into the full dataset
Full (combined)  | 11,500              | Literature-based and custom datasets  | Yes
GlobalVoices     | 1,031,725           | Website                               | Yes

Table 4.1: Summary of the different datasets

4.2.2 Individual Sentences

We design individual sentences mainly for testing the performance of our rule-based translator after integrating different rules. This requires having different categories of sentences from the source language. In our case, we collect and consider 540 individual Bengali sentences for translation, covering different categories (rules) of sentences. Figure 4.2 shows some examples of how we choose different categories of sentences. For example, the first sentence in the figure is an example of simple present tense.
The second sentence and the third sentence refer to present continuous tense and simple past tense respectively. The last sentence is an example of a complex sentence.

Figure 4.2: Individual sentences for evaluating translations by our rule-based translator
4.2.3 Literature-based Dataset

Unlike the rule-based translator, NMT requires a large parallel corpus of Bengali-English sentence pairs. Therefore, we develop our literature-based dataset keeping NMT as the prime focus. It is a challenging task to collect and compile a large parallel corpus using Bengali literature, as most of the translations of Bengali literature books are available as scanned copies that are not editable. In this regard, we find Al-Quran (the holy Islamic book) to be available as a parallel corpus consisting of Bengali-English sentence pairs. Therefore, we adopt Al-Quran as the source of our literature-based dataset, which contains around 8,000 Bengali-English sentence pairs. Figure 4.3 and Figure 4.4 show snippets of our Bengali and English datasets respectively, extracted in this manner. Note that it is not ideal to consider only Al-Quran as a source for Bengali-English translation for two reasons: 1) most of the source sentences are tough for a machine (or even for a human) to realize and process, and 2) the translations are relatively complex to some extent.

In addition to our dataset, we need to provide vocabulary files of both the source and target languages for predicting words while generating translations with NMT. There are two separate vocabulary files for Bengali and English, which we generate from the Bengali and English sentences respectively. These files contain one unique token (word) per line. Besides, NMT needs the words to be sorted in descending order according to their frequency (number of appearances) in the whole corpus. Figure 4.5 and Figure 4.6 show snippets of the Bengali and English vocabulary files respectively.

Another point is that each vocabulary file should begin with three special tokens, as shown in Figure 4.7. Here, 1) "<unk>" replaces unknown word translations, 2) "<s>" is the start symbol ("tgt_sos_id" in our code) that enables the decoding (translation) process to start as soon as the decoder receives it, and 3) "</s>" is the end-of-sentence symbol ("tgt_eos_id" in our code) that lets the translation process continue until this marker is produced.

Figure 4.3: Partial Bengali literature-based dataset (extracted from Al-Quran)
Figure 4.4: Partial English literature-based dataset (extracted from Al-Quran)
Figure 4.5: Partial Bengali vocabulary
Figure 4.6: Partial English vocabulary
Figure 4.7: Beginning of vocabulary files
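The vocabulary construction just described is mechanical enough to sketch directly; the following Python snippet builds such a file, with the corpus and output paths being hypothetical placeholders.

    # Minimal sketch: build an NMT vocabulary file with one token per line,
    # special tokens first, remaining words in descending corpus frequency.
    from collections import Counter

    def build_vocab(corpus_path, vocab_path):
        counts = Counter()
        with open(corpus_path, encoding="utf-8") as f:
            for line in f:
                counts.update(line.split())
        with open(vocab_path, "w", encoding="utf-8") as out:
            for token in ("<unk>", "<s>", "</s>"):  # required special tokens
                out.write(token + "\n")
            for word, _ in counts.most_common():    # descending frequency
                out.write(word + "\n")

    build_vocab("train.bn", "vocab.bn")  # hypothetical file names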
4.2.4 Custom Dataset

The literature-based dataset from Al-Quran has a considerable size (around 8,000 sentence pairs) that can be used for training the NMT. However, the sentences of Al-Quran might not be standard enough to represent a language, and they can also be quite complex for a machine to recognize and process. Therefore, we develop another dataset with more usual and realizable sentences representing both the source and the target languages for the purpose of training the NMT better.

The major sources of this custom dataset are newspaper articles, movie subtitles, websites, etc. However, we cannot import any such existing source directly as a Bengali-English parallel corpus. Each source (Bengali) sentence from the newspapers or the subtitles requires manual checking and editing to generate the parallel target (English) sentence. We perform this on our own to develop the custom dataset, which is presented in the Appendix. The size of our custom dataset is around 3,500 parallel sentences, which is too small to train an NMT system reasonably. Therefore, we do not use this dataset independently in our experimentation. We present a snippet of our custom dataset in Figure 4.8.

Figure 4.8: Partial custom dataset

4.2.5 Full Dataset

Our literature-based dataset consists of sentences only from the holy Al-Quran, whereas our custom dataset is not large enough to be considered for training an NMT system. Therefore, we combine our custom dataset with our literature-based dataset to experiment with a larger and more diversified dataset. Thus, our full dataset (the literature-based dataset combined with the custom dataset) consists of around 11,500 Bengali-English sentence pairs from different sources such as Al-Quran, newspaper articles, movie subtitles, and university websites. Besides, both the Bengali and English sentences in our full dataset vary in size or length. Figure 4.9 reflects the percentages (%) of different types of sentences in our full dataset in terms of different sizes or lengths. In addition, we also generate the necessary vocabulary files for our full dataset, similar to what we have done for the literature-based dataset.

Figure 4.9: Percentages of sizes of sentences in the full dataset: (a) Bengali sentences; (b) English sentences

4.2.6 GlobalVoices Dataset

Data-driven translators (NMT or SMT) require a significant amount of training data. However, our full dataset contains up to 11,500 parallel Bengali-English sentences, which represents a low-resource language context. Therefore, we develop a larger Bengali-English parallel corpus containing more than one million sentence pairs to extend our experimentation to a high-resource context. Figure 4.10 reflects the percentages (%) of different types of sentences in this dataset in terms of different sizes or lengths.

Figure 4.10: Percentages of sizes of sentences in the GlobalVoices dataset: (a) Bengali sentences; (b) English sentences

4.2.7 Representativeness in Our Datasets

We analyze the representativeness of our datasets using Zipf's law [51]. Zipf's law pertains to the frequency distribution of words in a language (or a dataset of the language that is large enough to be representative of the language). To illustrate Zipf's law, suppose we have a dataset containing V unique words. For each word in the dataset, we compute how many times the word occurs in the dataset; we refer to this as freq(word). Then, we rank the words (Rank(word)) in descending order of their frequencies. Let r be the rank of a word and Prob(r) be the probability of a word at rank r. By definition, Prob(r) = freq(r)/N, where freq(r) is the number of times the word at rank r appears in the dataset, and N is the total number of words in the dataset. Zipf's law states that r × Prob(r) = A, where A is a constant that we should determine empirically from the dataset. Taking into account that Prob(r) = freq(r)/N, we can rewrite Zipf's law as r × freq(r) = A × N.

To demonstrate that Zipf's law holds in our dataset, we compute freq(r), which involves computing the frequency and rank of each word. Then, we compute r × freq(r) to check whether it remains approximately constant in all cases.
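A minimal Python sketch of this check is shown below; the corpus path is a hypothetical placeholder, and plotting the log-log values (as done in the figures that follow) can be layered on top of the returned points.

    # Minimal sketch of the Zipf's-law check: rank words by frequency and
    # inspect whether r * freq(r) stays roughly constant.
    from collections import Counter

    def zipf_points(corpus_path):
        counts = Counter()
        with open(corpus_path, encoding="utf-8") as f:
            for line in f:
                counts.update(line.split())
        freqs = sorted(counts.values(), reverse=True)  # freq(r) for r = 1, 2, ...
        return [(rank, freq, rank * freq)              # r * freq(r) ~ A * N
                for rank, freq in enumerate(freqs, start=1)]

    for rank, freq, product in zipf_points("train.bn")[:10]:  # hypothetical path
        print(rank, freq, product)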
The simplest way to show that Zipf's law holds in a dataset is to plot the computed values and check whether the slope is proportionately downward. Here, instead of plotting freq(r) versus rank, it is better to plot log(r) on the X axis and log(freq(r)) on the Y axis [51]. Accordingly, we plot the computed values for the Bengali corpus and the English corpus separately in two different graphs.

We present the graphs for our first dataset (the literature-based dataset) in Figure 4.11(a) and Figure 4.11(b) respectively. Figure 4.11(a) shows that our Bengali corpus exhibits a slight deviation from Zipf's law; however, our English corpus perfectly follows Zipf's law. Similarly, we present the graphs for our second dataset (the full dataset) in Figure 4.12(a) and Figure 4.12(b) respectively. Here, Figure 4.12(a) shows that the Bengali corpus of the full dataset exhibits less deviation from Zipf's law than the Bengali corpus of the literature-based dataset, owing to the combination of the literature-based dataset with the custom dataset.

Figure 4.11: Representativeness in our literature-based dataset according to Zipf's law: (a) results in the Bengali corpus; (b) results in the English corpus

Figure 4.12: Representativeness in our full dataset according to Zipf's law: (a) results in the Bengali corpus; (b) results in the English corpus

4.3 Evaluation Metrics

Human evaluations of machine translation are extensive; however, they are expensive and can take substantial time to finish. Therefore, we need to adopt a quick method of automatic evaluation of machine translation that correlates highly with human evaluation. Accordingly, for the purpose of performance evaluation of our system, we adopt three different metrics that are widely used for evaluating the performance of machine translation: 1) Bi-Lingual Evaluation Understudy (BLEU) [19], 2) Metric for Evaluation of Translation with Explicit ORdering (METEOR) [20], and 3) Translation Edit Rate (TER) [21]. We present a brief overview of each of these metrics in the following subsections.

4.3.1 BLEU

BLEU presents an automated understudy to skilled human judges, substituting for them when quick or frequent evaluations are needed [19]. "The closer a machine translation is to a professional human translation, the better it is": this is the theme of the method. Typically, there can be many "perfect" translations of a given source sentence. These translations may vary in word choice or in word order even when they use the same words. Yet, humans can clearly distinguish a good translation from a bad one. For example, let us consider two candidate translations of a source sentence in Example 1.

Example 1.
• Candidate 1: "It is a guide to action, which ensures that the military always obeys the commands of the party."
• Candidate 2: "It is to insure the troops forever hearing the activity guidebook that party direct."

Although they appear to convey the same meaning, they differ markedly in quality. For comparison, we state three reference human translations of the same sentence below.
• Reference 1: "It is a guide to action that ensures that the military will forever heed Party commands."
• Reference 2: "It is the guiding principle, which guarantees the military forces always being under the command of the Party."
• Reference 3: "It is the practical guide for the army always to heed the directions of the party."

It is clear that the good translation, Candidate 1, shares many words and phrases with these three reference translations, while Candidate 2 does not. Note that Candidate 1 shares "It is a guide to action" with Reference 1, "which" with Reference 2, "ensures that the military" with Reference 1, "always" with References 2 and 3, "commands" with Reference 1, and finally "of the party" with Reference 2 (all ignoring capitalization). In contrast, Candidate 2 exhibits far fewer matches, and their extent is smaller.

It is clear that an automated program can rank Candidate 1 higher than Candidate 2 simply by comparing n-gram matches between each candidate translation and the reference translations. (An n-gram is a contiguous sequence of n items from a given dataset. The items can be phonemes, syllables, letters, words, etc., according to the application. For example, if the sample sentence is "This is an example", the corresponding 1-grams (unigrams) are "This", "is", "an", and "example", the corresponding 2-grams (bigrams) are "This is", "is an", and "an example", and so on.) Here, BLEU compares the n-grams of the candidate with the n-grams of the reference translations and counts the number of matches. These matches are position independent. The more matches, the better the candidate translation is.

To calculate the BLEU score, we first need to calculate the modified n-gram precision for the entire test corpus. To do so, we count the maximum number of times a candidate n-gram occurs in any single reference translation. Note that we compute these n-gram matches sentence by sentence. Next, we clip the total count of each candidate n-gram by its maximum reference count. In other words, we truncate each n-gram's count, if necessary, so that it does not exceed the largest count observed in any single reference for that n-gram. Let us consider another example in this regard.

Example 2.
• Candidate: "the the the the the the the"
• Reference 1: "the cat is on the mat"
• Reference 2: "there is a cat on the mat"

In the example above, the unigram (n=1) "the" appears twice in Reference 1 and once in Reference 2. Thus, the maximum reference count for the unigram is 2, whereas its total count in the candidate sentence is 7. Therefore, we clip its total count (7) by its maximum reference count (2). Then, we sum these clipped counts over all distinct n-grams of all the candidate sentences, and divide the summation by the total (unclipped) number of candidate n-grams in the test corpus. Therefore, we can calculate the modified n-gram precision score P_n for the entire test corpus as follows:

P_n = \frac{\sum_{C \in \text{Candidates}} \sum_{\text{n-gram} \in C} \text{Count}_{\text{clip}}(\text{n-gram})}{\sum_{C' \in \text{Candidates}} \sum_{\text{n-gram}' \in C'} \text{Count}(\text{n-gram}')} \quad (4.1)

In Example 1, if we consider Candidate 1 to be the only candidate sentence in the entire corpus, then Candidate 1 (corpus) achieves a modified unigram precision of 17/18. (In Candidate 1, there are sixteen distinct unigrams: "It", "is", "a", "guide", "to", "action,", "which", "ensures", "that", "the", "military", "always", "obeys", "commands", "of", and "party". Here, "the" has clipped count 3, "obeys" has 0, and each of the other unigrams has clipped count 1, contributing to a total clipped count of 17. Besides, the total number of candidate unigrams is 18. Therefore, the modified unigram precision is 17/18.) Similarly, Candidate 2 achieves a modified unigram precision of 8/14, and the modified unigram precision in Example 2 is 2/7. Besides, Candidate 1 achieves a modified bigram (n=2) precision of 10/17, whereas the lower-quality Candidate 2 achieves a modified bigram precision of 1/13. In Example 2, the candidate sentence achieves a modified bigram precision of 0.
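The clipping rule above translates directly into code; the following Python sketch computes the modified n-gram precision for a single candidate and reproduces the 2/7 figure from Example 2.

    # Minimal sketch of BLEU's modified n-gram precision with count clipping.
    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def modified_precision(candidate, references, n):
        cand_counts = Counter(ngrams(candidate.split(), n))
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref.split(), n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(count, max_ref_counts[gram])
                      for gram, count in cand_counts.items())
        total = sum(cand_counts.values())
        return clipped / total if total else 0.0

    print(modified_precision("the the the the the the the",
                             ["the cat is on the mat",
                              "there is a cat on the mat"], 1))  # -> 2/7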
The modified n-gram precision measure already penalizes candidate translations that are longer than their references. To also penalize candidates that are too short, we introduce a multiplicative brevity penalty factor, so that a high-scoring candidate translation must now match the reference translations in length, in word choice, and in word order. We calculate the brevity penalty over the entire corpus to allow some freedom at the sentence level. To do so, we first compute the test corpus' effective reference length, r, by summing the best-match lengths for each candidate sentence in the corpus. Next, we choose the brevity penalty to be a decaying exponential in r/c, where c is the total length of the candidate translation corpus. We calculate the brevity penalty (BP) as follows [19]:

BP = \begin{cases} 1 & \text{if } c > r \\ e^{1 - r/c} & \text{otherwise} \end{cases} \quad (4.2)

Finally, we calculate the BLEU score for the entire test corpus using the following formulas [19]:

\text{BLEU} = \text{BP} \times \exp\left( \sum_{n=1}^{N} w_n \log p_n \right) \quad (4.3)

\log \text{BLEU} = \min\left( 1 - \frac{r}{c},\, 0 \right) + \sum_{n=1}^{N} w_n \log p_n \quad (4.4)

Here, the w_n are positive weights summing to one. In the baseline, we choose N = 4 and uniform weights w_n = 1/N.

4.3.2 METEOR

METEOR is an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced candidate translation and human-produced reference translations [20]. METEOR can match unigrams based on their surface forms, stemmed forms, and meanings. Furthermore, METEOR can easily be extended to include more advanced matching strategies. Once METEOR finds all generalized unigram matches between the two strings, it computes a score for this matching using a combination of unigram precision (the fraction of candidate unigrams also found in the reference), unigram recall (the fraction of reference unigrams also found in the candidate), and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are with respect to the reference.

METEOR evaluates a translation by computing a score based on explicit word-to-word matches between the translation and a reference translation. If more than one reference translation is available, METEOR scores the given translation against each reference independently and reports the best score. Given a pair of translations (a candidate sentence and a reference sentence) to be compared, METEOR creates an alignment between the two strings. We define an alignment as a mapping between unigrams such that every unigram in each string maps to zero or one unigram in the other string, and to no unigram in the same string. Thus, in a given alignment, a single unigram in one string cannot map to more than one unigram in the other string. We produce this alignment incrementally through a series of stages, each stage consisting of two distinct phases.

In the first phase, we list all the possible unigram mappings between the two strings. For example, if the word "computer" occurs once in the system translation and twice in the reference translation, we list two
possible unigram mappings: one mapping the occurrence of "computer" in the system translation to the first occurrence of "computer" in the reference translation, and another mapping it to the second occurrence. Here, different modules map unigrams based on different criteria. The "exact" module maps two unigrams if they are exactly the same (e.g., "computers" maps to "computers" but not to "computer"). The "porter stem" module maps two unigrams if they are the same after they are stemmed using the Porter stemmer (e.g., "computers" maps to both "computers" and "computer"). The "WN synonymy" module maps two unigrams if they are synonyms of each other (e.g., "well" maps to "good").

In the second phase of each stage, we select the largest subset of these unigram mappings such that the resulting set constitutes an alignment as defined above (that is, each unigram must map to at most one unigram in the other string). If more than one subset constitutes an alignment and also has the same cardinality as the largest set, then we select the set that has the least number of unigram mapping crosses, as shown in Figure 4.13.

Figure 4.13: Unigram mappings between a candidate sentence and a reference sentence

Here, we choose the unigram mapping of Figure 4.13(a) over that of Figure 4.13(b), as Figure 4.13(a) has the least number of unigram mapping crosses. Formally, two unigram mappings (t_i, r_j) and (t_k, r_l) (where t_i and t_k are unigrams in the system translation mapped to unigrams r_j and r_l in the reference translation respectively) are said to cross if and only if the following expression evaluates to a negative number:

\left( \text{pos}(t_i) - \text{pos}(t_k) \right) \times \left( \text{pos}(r_j) - \text{pos}(r_l) \right) \quad (4.5)

Here, pos(t_x) is the numeric position of the unigram t_x in the system translation string, and pos(r_y) is the numeric position of the unigram r_y in the reference string.

Each stage only maps unigrams that have not been mapped in any of the preceding stages. Generally, the first stage uses the "exact" module, the second the "porter stem" module, and the third the "WN synonymy" module. Once we have run all the stages and produced a final alignment between the candidate translation and the reference translation, we compute the METEOR score for this pair of sentences as follows. First, we compute the unigram precision (P) as the ratio of the number of unigrams in the system translation that are mapped (to unigrams in the reference translation) to the total number of unigrams in the system translation. Similarly, we compute the unigram recall (R) as the ratio of the number of unigrams in the candidate sentence that are mapped (to unigrams in the reference sentence) to the total number of unigrams in the reference translation. Let us consider a candidate sentence
and a reference translation in the following example.

Example 3.
• Candidate: "on the mat sat the cat"
• Reference: "the cat sat on the mat"

In the above example, the number of mapped unigrams in the candidate sentence is 6, and the total number of unigrams in the candidate sentence is 6. Therefore, the unigram precision P is 1 (6/6). Besides, the total number of unigrams in the reference sentence is 6. Therefore, the unigram recall R is also 1 (6/6). Next, we compute F_mean by combining the precision and recall via a harmonic mean that places most of the weight on recall. The resulting formula is:

F_{mean} = \frac{10PR}{R + 9P} \quad (4.6)

To take longer matches into account, we calculate a penalty for a given alignment using the following formula:

\text{Penalty} = 0.5 \times \frac{\#\text{ of chunks}}{\#\text{ of unigrams matched}} \quad (4.7)

For example, if the candidate sentence is "the president spoke to the audience" and the reference sentence is "the president then spoke to the audience", there are two chunks: "the president" and "spoke to the audience". Similarly, in Example 3, there are six chunks (no bigram or longer matches): "on", "the", "mat", "sat", "the", and "cat". Note that the penalty increases as the number of chunks increases, up to a maximum of 0.5; as the number of chunks approaches 1, the penalty decreases. Finally, we compute the METEOR score for the chosen alignment as follows:

\text{Score} = F_{mean} \times (1 - \text{Penalty}) \quad (4.8)

This has the effect of reducing F_mean by at most 50% (Penalty_max = 0.5) when there are no bigram or longer matches. For example, we calculate the METEOR score for Example 3 as follows:

F_{mean} = \frac{10 \times 1 \times 1}{1 + 9 \times 1} = 1.00
\text{Penalty} = 0.5 \times \frac{6}{6} = 0.50
\text{Score} = 1.00 \times (1 - 0.50) = 0.50
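Since Equations 4.6 through 4.8 need only four counts, the scoring step can be sketched compactly; this snippet assumes the alignment (matched unigrams and chunks) has already been computed by the matching stages.

    # Minimal sketch of METEOR's scoring formulas (Equations 4.6-4.8),
    # given counts produced by an alignment.
    def meteor_score(matched, cand_len, ref_len, chunks):
        if matched == 0:
            return 0.0
        p = matched / cand_len                 # unigram precision
        r = matched / ref_len                  # unigram recall
        fmean = 10 * p * r / (r + 9 * p)       # recall-weighted harmonic mean
        penalty = 0.5 * (chunks / matched)     # fragmentation penalty
        return fmean * (1 - penalty)

    # Example 3: 6 matched unigrams in 6 chunks -> 1.00 * (1 - 0.50) = 0.50
    print(meteor_score(matched=6, cand_len=6, ref_len=6, chunks=6))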
4.3.3 TER

Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so that it exactly matches a reference translation [21]. Formally, we define TER as the minimum number of edits (normalized by the average length of the references) needed to change a candidate sentence so that it exactly matches one of the references. Since the main concern is the minimum number of edits needed to modify the candidate, we measure only the number of edits to the closest reference. Specifically, we can calculate TER as follows:

\text{TER} = \frac{\text{Number of edits}}{\text{Average number of reference words}} \quad (4.9)

Possible edits include insertion, deletion, and substitution of single words, as well as shifts of word sequences. A shift moves a contiguous sequence of words within the candidate sentence to another location within that sentence. All edits, including shifts of any number of words by any distance, have equal cost [21]. In addition, we treat punctuation tokens as normal words and count miscapitalization as an edit. For example, let us consider the reference-candidate pair below, where we indicate the differences between the reference and the candidate in upper case.

• Reference: SAUDI ARABIA denied THIS WEEK information published in the AMERICAN new york times.
• Candidate: THIS WEEK THE SAUDIS denied information published in the new york times.

Here, the candidate sentence is fluent and means the same thing (except for the missing "American") as the reference sentence. However, TER does not consider this an exact match. First, the phrase "this week" in the candidate is in a shifted position (at the beginning of the sentence rather than after the word "denied") with respect to the reference. Second, the phrase "Saudi Arabia" in the reference appears as "the Saudis" in the candidate (this counts as two separate substitutions). Finally, the word "American" appears only in the reference. If we apply TER to this candidate and reference, the number of edits is 4 (1 shift, 2 substitutions, and 1 insertion), giving a TER score of 4/13 ≈ 31%.

We calculate the number of edits for TER in two phases. In the first phase, we use a greedy search to find the set of shifts, by repeatedly selecting the shift that most reduces the number of insertions, deletions, and substitutions, until no more beneficial shift remains. (Since finding an optimal sequence of edits with shifts is conjectured to be NP-hard, a greedy search is used here [21].) In the next phase, we use dynamic programming to optimally calculate the remaining edit distance as a minimum edit distance in which each insertion, deletion, or substitution has a cost of 1 [21]. We calculate the number of edits against each of the references and take the best (lowest) score.

The greedy search is necessary to select the set of shifts because an optimal sequence of edits (with shifts) is very expensive to find. We use several other constraints in order to further reduce the space of possible shifts and to allow efficient computation. These constraints are intended to simulate the way in which a human editor might choose the words to shift. They are as follows:

1. The shifted words must exactly match the reference words in the destination position.
2. The word sequence of the candidate in the original position and the corresponding reference words must not exactly match. This prevents the shifting of words that are currently correctly matched.
3. The word sequence of the reference that corresponds to the destination position must be misaligned before the shift. This prevents shifting to align words that are already correctly aligned.

As an example, let us consider the following reference-candidate pair:

Reference: a b c d e f c
Candidate: a d e b c f

Here, we can shift the words "b c" in the candidate to the left to correspond to the words "b c" in the reference, because there is a mismatch in the current location of "b c" in the candidate and there is a mismatch of "b c" in the reference. After the shift, the pair becomes:

Reference: a b c d e f c
Candidate: a b c d e f

TER, as defined above, only calculates the number of edits between the best reference and the candidate. If we use TER in the case of multiple references, it most accurately measures the error rate of a candidate sentence when the corresponding reference is the closest possible reference to the candidate.
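The second phase is an ordinary word-level edit distance; the sketch below covers only that phase (no shift search), so on the Saudi Arabia example it would overestimate the edits that the shift would otherwise save.

    # Minimal sketch of TER's second phase: word-level minimum edit distance
    # (unit-cost insertion, deletion, substitution), normalized by the
    # average reference length. The greedy shift search is omitted.
    def word_edit_distance(cand, ref):
        prev = list(range(len(ref) + 1))
        for i, cw in enumerate(cand, 1):
            curr = [i]
            for j, rw in enumerate(ref, 1):
                cost = 0 if cw == rw else 1
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
            prev = curr
        return prev[-1]

    def ter_without_shifts(candidate, references):
        cand = candidate.split()
        edits = min(word_edit_distance(cand, ref.split()) for ref in references)
        avg_ref_len = sum(len(ref.split()) for ref in references) / len(references)
        return edits / avg_ref_len

    print(ter_without_shifts("a b c d e f", ["a b c d e f c"]))  # -> 1/7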
findings of the evaluation in the next subsections.

4.4.1 Results from Our Proposed Rule-based Translator

In our rule-based translator, we consider all types of sentences, covering basic simple, complex, and compound sentences. Initially, we implement simple sentences in our system by adding some basic rules for forming a simple sentence. Figure 4.14 and Figure 4.15 show a couple of glimpses of sample outputs from our Java implementation for translating simple sentences.

Figure 4.14: Sample translation of simple sentences (simple past tense)

In Figure 4.14, we show the translation of a sentence in simple past tense. Here, our rule-based translation system analyzes the input Bengali sentence and synthesizes the translation. First, our system determines the subject and recognizes it as a pronoun. Then, our system identifies the object as a noun. Next, our system identifies the verb and analyzes it carefully; by analyzing the suffix of the verb, our system detects the tense. Besides, our system also recognizes the person and number of the subject. Apart from this, our system generates translations of all the words (subject, object, and verb) in the input sentence from its vocabulary, as shown in the figure. Later, it modifies the verb by adding suffixes according to the tense. This is how our system generates the final output (translated sentence). Similarly, Figure 4.15 shows another example of translating a sentence in simple future tense.

Figure 4.15: Sample translation of simple sentences (simple future tense)

Now, we focus on a complex sentence in Figure 4.16. Here, first, our system recognizes the complex sentence by examining the presence of any keyword of a complex sentence in the input sentence. Next, our system splits the complex sentence into two independent simple sentences. Then, our system translates these simple sentences using the same procedure used for translating simple sentences, as discussed earlier. Finally, our system combines the translated simple sentences with the necessary keywords to generate the target translated sentence.

Figure 4.16: Sample translation of a complex sentence

We test our rule-based translator with different types of sentences (from our datasets), which exercise a number of different rules. We show a scenario of our experimental results in Table 4.2, Table 4.3, and Table 4.4, where we summarize some of our implemented rules and generated outputs. Table 4.2 shows a partial list of our implemented rules used for translating simple sentences. Table 4.3 and Table 4.4 show some of our implemented rules and some examples of generated translations for complex and compound sentences respectively.

Table 4.2: Experimental results for some example simple sentences
Table 4.3: Experimental results for some example complex sentences
Table 4.4: Experimental results for some example compound sentences

4.4.2 Results on Name Identification

Next, we present the outcomes of our name identification mechanism and the corresponding translations.
As discussed earlier, one of the major improvements by our rule-based translator is name identification and name translation.

Figure 4.17: Sample outputs of name identification and translating names

Figure 4.17 shows the translation of two names, 'Sohan' and 'Oishee', generated by our system. Besides, our system can also process subjects with emphasizing tags after separating the emphasizing tags, as shown in Figure 4.18. However, one important
observation regarding emphasized subject identification is that the emphasizing tag itself may be part of a valid name (subject), as mentioned earlier in the previous chapter (Chapter 3). In such (less frequent) cases, our system discards that tag (which is not actually a tag) from the valid name, leading to a faulty name identification. In Figure 4.18, our system misinterprets 'Romio' by removing the tag, thus reducing it to 'Romi'. Note that such cases are quite rare and weigh little against the overall improvement achieved in most cases. Therefore, we tolerate this shortcoming in our system as a trade-off and leave its solution as future work.

Figure 4.18: Processing of subjects with emphasizing tags

4.4.3 Results on Optimized Verb Translation Technique

For our optimized verb identification technique, the modified Levenshtein distance calculation shows significant improvement in optimizing both memory consumption and searching time. We apply this algorithm to several Bengali verbs to get their root verbs, which we then map to the standard forms of the verbs. We present the outcomes of our modified Levenshtein distance algorithm in Figure 4.19.

Figure 4.19: Outcomes of our first modification of the Levenshtein distance algorithm

Note that detection of the root verb by calculating the Levenshtein distance can be incorrect for some forms of verbs, which can lead to incorrect translations of those verbs. In Figure 4.19, we can notice one such case where our first modification of the Levenshtein distance algorithm provides the erroneously translated verb 'eat' in place of 'play'. We handle such erroneous cases successfully through preprocessing of the verbs before determining the root verbs, as discussed earlier in Chapter 3. Figure 4.20 shows the improvement achieved (inside the green box) after incorporating the preprocessing of verbs before calculating the Levenshtein distance. Here, we overcome the incorrect detection of root verbs shown previously in Figure 4.19 and accomplish the detection of correct root verbs for almost all possible cases. Afterwards, our system translates the verb by modifying its raw translated form as per the other relevant information (PoS tagging, person, number, etc.) extracted from the input sentence.

Figure 4.20: Further improvement over the modified Levenshtein distance through removing common suffixes

4.4.4 Overall Improvement with Name Identification and Optimized Verb Translation Technique

We present the improvement achieved by our rule-based translator after implementing name identification and the optimized verb translation technique, in terms of BLEU score, in Table 4.5. Table 4.5 shows the improvement achieved using both our individual sentences (540 sentences) and our full dataset (11,500 sentences). Here, we achieve significant improvement on our individual sentences, as these sentences are designed based on the set of rules implemented in our rule-based translator. As discussed earlier (in Chapter 3), we extract individual sentences mainly for testing the performance of our rule-based translator, so they remain in line with the different rules implemented in it.
Dataset              | Rule-based | Improved rule-based | Improvement
Individual sentences | 71.28      | 92.36               | 30%
Full                 | 3.05       | 3.13                | 3%

Table 4.5: Improvement with name identification and optimized verb translation technique in terms of BLEU score

4.4.5 Comparison with Google Translator

We compare the performance of our rule-based translator with that of the popular Google Translator. In the
comparison, we show that our rule-based translator performs better than Google Translator for sentences whose rules have already been implemented in our system. We show examples of such improvements achieved by our rule-based translator over Google Translator in Table 4.6.

Table 4.6: Comparison between the performances of our rule-based translator and Google Translator for some example sentences

Figure 4.21: Snapshots of translations generated by Google Translator for our example sentences in Table 4.6: (a) example sentence 1; (b) example sentence 2; (c) example sentence 3; (d) example sentence 4 (collected on or before August 30, 2019)

We show all of these translations generated by Google Translator [45] for our example sentences in Figure 4.21.

4.4.6 Results from Our Different Blending Approaches

After implementing both our proposed rule-based translator and the classical NMT, we blend these two approaches using the three different techniques discussed earlier. We analyze the performance of each of these approaches with three standard metrics, namely BLEU, METEOR, and TER, as presented earlier. We consider different types of datasets with different sizes for analyzing the performances, because results obtained from only one dataset may not be enough to draw any convincing conclusion about the translation performance of our different proposed approaches. Therefore, as already mentioned, we adopt a literature-based dataset (from Al-Quran) and create another dataset from different non-literature sources. The latter dataset, i.e., our custom dataset, is relatively small (around 3,500 parallel sentence pairs), too small to train an NMT system reasonably. Therefore, we combine this dataset with our literature-based dataset to form another dataset (the full dataset) for experimentation.

4.4.6.1 Results using Literature-based Dataset

First, we present the results (scores of the performance metrics) obtained from translation over our literature-based dataset in Table 4.7.

Score  | NMT   | Rule-based | NMT+rule-based | Rule-based+NMT | NMT or rule-based
BLEU   | 8.56  | 1.28       | 11.43          | 0.84           | 8.80
METEOR | 12.34 | 13.50      | 20.31          | 10.62          | 12.43
TER    | 93.73 | 93.90      | 85.09          | 96.62          | 93.50

Table 4.7: Comparison among different translation approaches

Table 4.7 shows a comparison among all the approaches (in isolation and in combination) using the standard performance metrics. Here, the higher the METEOR and BLEU scores and the lower the TER score, the better the performance. From Table 4.7, we notice that the 'NMT followed by rule-based' (NMT+rule-based) blending technique exhibits significant improvement over the classical NMT (GNMT). More specifically, it emerges as the best blending technique, which is reflected in the performance scores of all three metrics. Therefore, we can see that our blending approaches can significantly improve the performance of NMT-generated translations. The best way of blending appears to be applying grammatical rules after translating by NMT. Another blending technique, 'either NMT or rule-based' (NMT or rule-based), also shows slight improvement over the classical NMT. We anticipated this, since in this technique we carefully choose the better of the translations from the rule-based translator and NMT according to the type of sentence.
4.4.6.1 Results using Literature-based Dataset

First, we present the results (scores of performance metrics) obtained from translation over our literature-based dataset in Table 4.7.

Score    NMT    Rule-based   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU     8.56   1.28         11.43            0.84             8.80
METEOR   12.34  13.50        20.31            10.62            12.43
TER      93.73  93.90        85.09            96.62            93.50

Table 4.7: Comparison among different translation approaches

Table 4.7 shows a comparison among all the approaches (in isolation and in combination) using the standard performance metrics. Here, the higher the METEOR and BLEU scores and the lower the TER score, the better the performance. From Table 4.7, we notice that the ‘NMT followed by rule-based’ (NMT+rule-based) blending technique exhibits significant improvement over the classical NMT (GNMT). More specifically, it emerges as the best blending technique, which gets reflected in the performance scores under each of the three metrics. Therefore, we can understand that our blending approaches can significantly improve the performance of NMT-generated translations. The best way of blending appears to be applying grammatical rules after translating by NMT.

Another blending technique, ‘either NMT or rule-based’ (NMT or rule-based), also shows slight improvement over the classical NMT. We actually anticipate this, since in this technique we carefully choose the best between the translations from the rule-based translator and NMT as per the type of each sentence. However, performance scores decline in the ‘rule-based followed by NMT’ (rule-based+NMT) blending approach, which points out the inability of NMT to further improve translations done by the rule-based translator. Table 4.8 takes a closer look at the BLEU scores of all the approaches for different n-grams (n = 1, 2, 3, and 4). Ideally, the BLEU score is considered for the n-gram model with n = 4. In all cases, including n = 4, the ‘NMT followed by rule-based’ blending outperforms all other alternatives.

n-gram   NMT    Rule-based   NMT+rule-based   Rule-based+NMT   NMT or rule-based
1-gram   31.46  31.74        46.07            25.59            31.35
2-gram   17.18  10.70        25.19            7.56             17.37
3-gram   11.42  3.70         16.04            2.60             11.66
4-gram   8.56   1.28         11.43            0.84             8.80

Table 4.8: Comparison as per BLEU scores
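For readers who wish to reproduce such an n-gram breakdown, the sketch below computes BLEU with uniform weights up to each maximum order n using NLTK; the tokenized sentences are placeholders, not our actual test corpus.

    # Sketch: BLEU at different maximum n-gram orders, as in Table 4.8.
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    refs = [[["he", "goes", "to", "school"]]]        # one reference list per hypothesis
    hyps = [["he", "goes", "to", "the", "school"]]   # tokenized system outputs

    smooth = SmoothingFunction().method1  # avoids zero scores for missing n-grams
    for n in range(1, 5):
        weights = tuple(1.0 / n for _ in range(n))   # uniform weights over 1..n
        score = corpus_bleu(refs, hyps, weights=weights, smoothing_function=smooth)
        print(f"{n}-gram BLEU: {100 * score:.2f}")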
Next, we present comparisons between classical NMT and the other approaches graphically to portray the individual performance scores for each of the test sentences. We show comparisons using METEOR and TER scores, where light red lines indicate NMT scores and deep blue lines indicate each of the other approaches one by one. Here, we adopt NMT as the benchmark approach in all the graphs, as it is commonly adopted by the widely-used Google translator. Note that we do not show any comparison in terms of BLEU score at the sentence level, as BLEU is generally calculated over the entire test corpus.

Comparison between NMT and Only Rule-based Approach

Firstly, Figure 4.22 and Figure 4.23 show a comparison between NMT and the only rule-based approach (deep blue lines) in isolation in terms of METEOR and TER scores respectively. These two figures reflect that the overall performance of the only rule-based approach is worse than that of NMT in isolation for the literature-based dataset.

Figure 4.22: NMT versus only rule-based METEOR score
Figure 4.23: NMT versus only rule-based TER score

Comparison between NMT and ‘NMT followed by Rule-based’ Approach

Next, we show the performance of one of our blending techniques, ‘NMT followed by rule-based’ (NMT+rule-based), in terms of METEOR and TER scores in Figure 4.24 and Figure 4.25 respectively. These two figures reflect the performance of our best blending technique in terms of METEOR and TER scores. Here, deep blue lines indicate the scores obtained using the ‘NMT followed by rule-based’ blending approach. We can see significant improvement over classical NMT in these figures.

Figure 4.24: NMT versus NMT followed by rule-based METEOR score
Figure 4.25: NMT versus NMT followed by rule-based TER score

Comparison between NMT and ‘Rule-based followed by NMT’ Approach

After that, we present the results of ‘rule-based followed by NMT’ (rule-based+NMT) in terms of METEOR and TER scores in Figure 4.26 and Figure 4.27 respectively. Here, deep blue lines indicate the scores obtained using the ‘rule-based followed by NMT’ blending approach. These two figures reflect that the ‘rule-based followed by NMT’ approach performs poorly when compared to the classical NMT. In fact, this blending technique proves itself to be the worst performer among all the approaches.

Figure 4.26: NMT versus rule-based followed by NMT METEOR score
Figure 4.27: NMT versus rule-based followed by NMT TER score

Comparison between NMT and ‘Either NMT or Rule-based’ Approach

Finally, we present the results of the ‘either NMT or rule-based’ blending technique in Figure 4.28 and Figure 4.29. Here, deep blue lines indicate the scores obtained using this blending approach. This approach performs on par with classical NMT as shown in the figures. The main reason behind this result is that most of the sentences in this dataset are lengthy. Since this approach chooses the NMT-generated translation when the sentence is long, it chooses NMT-generated translations mostly. However, this approach performs at least as well as classical NMT.

Figure 4.28: NMT versus NMT or rule-based METEOR score
Figure 4.29: NMT versus NMT or rule-based TER score
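The selection logic of this technique can be sketched as follows. Note that the word-count threshold and the translator callables here are hypothetical placeholders; the exact cutoff belongs to our system's configuration and is not specified in this chapter.

    # Sketch of the 'either NMT or rule-based' selection: pick the NMT output
    # for long sentences and the rule-based output otherwise, without mixing.
    LENGTH_THRESHOLD = 8  # words; illustrative value only

    def select_translation(bengali_sentence, nmt_translate, rule_based_translate):
        """Choose exactly one translator for the whole sentence."""
        if len(bengali_sentence.split()) > LENGTH_THRESHOLD:
            return nmt_translate(bengali_sentence)      # long sentence: trust NMT
        return rule_based_translate(bengali_sentence)   # short sentence: trust rules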
We can clearly notice that the light red lines exceed the deep blue lines for most of the sentences in Figure 4.22 and Figure 4.26. That means both the only rule-based approach and the ‘rule-based followed by NMT’ approach perform worse than NMT in isolation. However, the deep blue lines exceed the light red lines in Figure 4.24 for almost all the sentences, which reflects the clear victory of our ‘NMT followed by rule-based’ approach over NMT in isolation. In addition, we notice that the light red lines and deep blue lines are mostly at the same level in Figure 4.28, which reflects the on-par performance of our ‘either NMT or rule-based’ approach as discussed above.

Analysis on Sensitivity of Our Operational Parameter

The performance of our rule-based translator changes as we add more rules. However, adding rules seems like a never-ending process. Therefore, we analyze how the implementation of different numbers of rules impacts the performance scores of our different approaches.

Figure 4.30: Variation of BLEU scores with an increase in the number of implemented rules

The BLEU score increases as the number of implemented rules increases in our system, as shown in Figure 4.30. In this figure, we show the performance of three different approaches with respect to an increase in the number of added rules: the only rule-based approach, NMT, and the ‘NMT followed by rule-based’ approach. We notice that the curves of the only rule-based approach and the ‘NMT followed by rule-based’ approach show a gradual increase (initially sharp) in BLEU score as the number of implemented rules increases. Besides, the curves tend to become flat after implementing around 90-100 rules in our system. This depends on the order in which different rules are added. In our system, we implement the more basic and important grammatical rules first, such as basic sentence structures (Table 4.2, Table 4.3, and Table 4.4), verb identification, tenses, etc. That is why the curve shows a sharp rise over the first 3-10 implemented rules, and then rises consistently until 70-80 rules are added. In our system, we add the most important rules, which significantly improve the translation performance, within around the first 50 rules (specifically, rule numbers 30-50). Afterwards, the addition of more rules barely changes the performance score, since those rules, such as detection of the subject's gender, punctuation, etc., contribute less compared to the previously added (first 50-60) rules.

Nonetheless, the curve for NMT remains flat (parallel to the X axis), since the performance of NMT does not change with the number of implemented rules. Moreover, although the curve of the ‘NMT followed by rule-based’ approach exhibits characteristics nearly similar to that of the only rule-based approach, it is not directly derived from the curve of the rule-based approach through any mathematical formula. However, if the performance of the translation generated by the only rule-based approach improves, then our blending (‘NMT followed by rule-based’ approach) also improves its performance to some extent, since our system blends with that improved rule-based translation after generating the translation by NMT. This is why we notice such similarity between these two curves.
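The sensitivity experiment behind Figure 4.30 can be outlined as below, where apply_rules and score_bleu are hypothetical stand-ins for our rule engine and corpus-level scorer, and the rules list is ordered as the rules were added to the system.

    # Sketch of the rule-count sensitivity experiment (Figure 4.30).
    def sensitivity_curve(rules, sources, references, apply_rules, score_bleu):
        points = []
        for k in range(1, len(rules) + 1):
            active = rules[:k]                          # first k implemented rules
            outputs = [apply_rules(s, active) for s in sources]
            points.append((k, score_bleu(outputs, references)))
        return points  # (number of rules, BLEU) pairs for plotting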
Similarly, we illustrate the variation of METEOR scores with respect to the number of added rules. Figure 4.31 presents the results for the only rule-based approach, NMT, and the ‘NMT followed by rule-based’ approach.

Figure 4.31: Variation of METEOR scores with an increase in the number of implemented rules

Trends in these curves for the METEOR scores of these three approaches are similar to what we have just presented for the BLEU scores above. Here, we show the variation only for the ‘NMT followed by rule-based’ approach, as this approach leads all other approaches. Note that the curve of this approach does not start from a zero score, as NMT already sets a positive score that our blending system further increases by applying rules.

Finally, we also show the variation of TER scores with an increase in the number of rules in Figure 4.32 for the only rule-based approach, NMT, and the ‘NMT followed by rule-based’ approach. Expectedly, apart from the NMT curve, the behaviour of the remaining two curves for TER scores is exactly opposite to that of the previous curves, as TER scores decrease with an increase in the number of rules. Here, a lower score refers to better performance, as the score is an error rate. Similar to the previous case, the curves go almost flat after the addition of the first 70 rules.

Figure 4.32: Variation of TER scores with an increase in the number of implemented rules

We present a combined graph (Figure 4.33) containing the normalized values of all the metrics for both the rule-based approach and the ‘NMT followed by rule-based’ approach. Here, for each metric, we normalize the values with respect to the maximum values we found. The combined presentation of all the normalized values in Figure 4.33 demonstrates the efficacy of our proposed blending approach, as its application improves the performance metrics in all cases.

Figure 4.33: Comparison of normalized performance scores with an increase in the number of implemented rules for literature-based dataset
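For clarity, the normalization behind Figure 4.33 amounts to dividing each metric's values by the maximum value observed for that metric, as in the small sketch below (the curve values are made up for illustration).

    # Max-normalization used for the combined graphs (illustrative values only).
    def normalize(values):
        peak = max(values)
        return [v / peak for v in values]  # every point scaled into [0, 1]

    bleu_curve = [2.0, 6.5, 10.1, 11.2, 11.43]  # hypothetical BLEU-vs-rules points
    print(normalize(bleu_curve))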
That is all about experimentation on performance scores using our literature-based dataset. However, as promised earlier, we also perform similar experimentation using another dataset (the full dataset), since scores obtained from only one dataset may not be enough to draw any convincing conclusion on translation performance.

4.4.7 Results using Full Dataset

Next, we perform experimentation using our combined (literature-based and custom) dataset. Table 4.9 shows a summary of the results obtained using this dataset.

Score    NMT    Rule-based   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU     9.28   3.13         12.26            1.34             9.87
METEOR   14.18  14.43        22.32            12.86            14.92
TER      92.78  93.21        83.83            95.52            92

Table 4.9: Comparison among different translation approaches for full (combined) dataset

Table 4.9 strongly supports the results obtained earlier (Table 4.7) using our literature-based dataset. Here, the ‘NMT followed by rule-based’ blending approach again outperforms all other approaches. In addition, the ‘either NMT or rule-based’ approach remains our second-best approach. Next, similar to our literature-based dataset, we show the variation of BLEU, METEOR, and TER scores with an increase in the number of rules in Figure 4.34, Figure 4.35, and Figure 4.36 respectively for the only rule-based approach, NMT, and the ‘NMT followed by rule-based’ approach using our full dataset.

Figure 4.34: Variation of BLEU scores with an increase in the number of implemented rules for full dataset
Figure 4.35: Variation of METEOR scores with an increase in the number of implemented rules for full dataset
Figure 4.36: Variation of TER scores with an increase in the number of implemented rules for full dataset

Besides, we also present another combined graph (Figure 4.37) containing the normalized values of all the metrics for both the rule-based approach and the ‘NMT followed by rule-based’ approach using our full dataset. Figure 4.37 reflects that the graph for our full dataset exhibits behaviour similar to that for our previous (literature-based) dataset. Therefore, we have double-checked and justified our earlier observations on the performance scores of the different approaches (for the literature-based dataset) using our combined dataset this time.

Figure 4.37: Comparison of normalized performance scores with an increase in the number of implemented rules for full dataset

Besides, we perform an analysis of time and memory consumption for our implemented methods with different datasets. To do so, we first calculate the average time and memory separately required by NMT and the rule-based translator for translating a sentence. Then, we determine the blending time and memory required while applying each of our blending techniques. We find that NMT requires around 80 minutes and 90 minutes for training with our literature-based dataset and full dataset respectively. After that, NMT performs inference for generating translations based on the learning acquired through training. We consider only this inference time (or translation time) in determining the time required for translating each sentence by NMT. On average, NMT requires 0.08-0.09s (80-90ms) for generating the translation of a sentence.

4.5 Resource Overhead

In this section, we analyze the resource overheads required for our different translation approaches. Specifically, we cover time overhead and memory overhead in the subsequent sections.

4.5.1 Time Overhead

Time overhead increases as the number of implemented rules increases in our system, as shown in Figure 4.38. In this figure, we show the time overhead per sentence translation for three different approaches with respect to an increase in the number of implemented rules: the only rule-based approach, NMT, and the ‘NMT followed by rule-based’ approach. Similar to the graphs for performance scores (METEOR and BLEU), the curves of the only rule-based approach and the ‘NMT followed by rule-based’ approach also exhibit a significant rise for the first 60-70 rules. However, unlike those (METEOR and BLEU) graphs, these two curves keep rising slowly rather than flattening as we keep adding more rules. Besides, the curve for NMT remains flat (parallel to the X axis), since the time overhead of NMT does not change with the number of implemented rules.

Figure 4.38: Comparison in variation of time with an increase in the number of implemented rules for literature-based dataset

Here, we calculate the time overhead of the ‘NMT followed by rule-based’ approach as the summation of the time overheads of the only rule-based approach, NMT, and the blending between them.
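Per-sentence times of this kind can be measured as in the sketch below, where the translator callable and the sentence list are hypothetical placeholders; the blended total then follows by summing the stage averages, matching how the last column of Table 4.10 is composed.

    # Sketch: average per-sentence translation time for one pipeline stage.
    import time

    def mean_seconds_per_sentence(translate, sentences):
        start = time.perf_counter()
        for s in sentences:
            translate(s)             # hypothetical translator callable
        return (time.perf_counter() - start) / len(sentences)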
Table 4.10 shows a summary of the time overheads of our aforementioned approaches for different numbers of implemented rules.

Number of rules   Rule-based   NMT    Blending   NMT followed by rule-based
3                 67ms         90ms   4.743s     4.900s
10                77ms         90ms   5.053s     5.220s
50                163ms        90ms   5.357s     5.610s
70                196ms        90ms   5.574s     5.860s
120               204ms        90ms   5.699s     5.993s

Table 4.10: Time overheads of rule-based, NMT, and ‘NMT followed by rule-based’ for literature-based dataset
Next, we also explore the time overheads of the different approaches with respect to an increase in the number of implemented rules using our full dataset. Figure 4.39 shows three different curves (rule-based, NMT, and NMT followed by rule-based) generated using our full dataset. We notice that the behaviour of each of the curves for this dataset remains identical to that of the curves for our previous (literature-based) dataset.

Figure 4.39: Comparison in variation of time with an increase in the number of implemented rules for full dataset

4.5.2 Memory Overhead

In addition, we perform an analysis of the memory consumption overhead for our different approaches with respect to different numbers of rules. The behaviour of the memory consumption curves is similar to that of the time overhead curves shown earlier. Figure 4.40 shows three curves (the only rule-based approach, NMT, and the ‘NMT followed by rule-based’ approach) reflecting total memory consumption per sentence translation with an increase in the number of implemented rules using our literature-based dataset. Here, the curves of the only rule-based approach and the ‘NMT followed by rule-based’ approach exhibit a significant rise for the first 60-70 rules, whereas the NMT curve remains flat (parallel to the X axis). We measure memory consumption in kilobytes (KB).

Figure 4.40: Comparison in variation of memory consumption with an increase in the number of implemented rules for literature-based dataset

Next, we also explore the memory consumption of the different approaches with respect to an increase in the number of implemented rules using our full dataset. Figure 4.41 shows three different curves (rule-based, NMT, and NMT followed by rule-based) generated using this dataset.

Figure 4.41: Comparison in variation of memory consumption with an increase in the number of implemented rules for full dataset
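Per-sentence memory figures of this kind can be approximated with Python's tracemalloc module, as in the hedged sketch below; the translator callable is again a placeholder.

    # Sketch: peak memory (in KB) allocated while translating one sentence.
    import tracemalloc

    def peak_memory_kb(translate, sentence):
        tracemalloc.start()
        translate(sentence)
        _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
        tracemalloc.stop()
        return peak / 1024.0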
4.6 Overall Comparison

We summarize the different results (performance scores, time overhead, and memory overhead) obtained using our different datasets in Table 4.11 and Table 4.12. Here, Table 4.11 reflects the results obtained for our literature-based dataset, and Table 4.12 reflects the results obtained for our full dataset.

             NMT     Rule-based   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU         8.56    1.28         11.43            0.84             8.80
METEOR       12.34   13.50        20.31            10.62            12.43
TER          93.73   93.90        85.09            96.62            93.50
Time (s)     0.090   0.204        5.993            4.011            0.806
Memory (KB)  200     610.982      1120.002         998.614          902.100

Table 4.11: Comparison among different translation approaches for literature-based dataset

             NMT     Rule-based   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU         9.28    3.13         12.26            1.34             9.87
METEOR       14.18   14.43        22.32            12.86            14.92
TER          92.78   93.21        83.83            95.52            92
Time (s)     0.092   0.203        6.569            4.97             0.807
Memory (KB)  200.60  609.702      1204.228         1037             903.341

Table 4.12: Comparison among different translation approaches for full dataset

4.7 Overall Experimental Findings

Next, we present our overall experimental findings in terms of the average percentage (%) improvement of our different blending approaches over different parameters, such as BLEU, METEOR, and TER, in Table 4.13, Table 4.14, Table 4.15, and Table 4.16. Here, Table 4.13 and Table 4.14 reflect the results (average percentage improvement) for the literature-based dataset and the full dataset respectively; note that we compute these percentage improvements with respect to NMT.

Parameters   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU         34%              -90%             3%
METEOR       65%              -14%             1%
TER          9%               -3%              0%

Table 4.13: Overall percentage (%) improvement over different parameters with respect to NMT for literature-based dataset
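These percentages follow the usual relative-change formula, with the sign flipped for TER since a lower TER is better. The small sketch below reproduces two of the Table 4.13 entries from the scores in Table 4.11.

    # Percentage improvement of a blended approach over a baseline score.
    def improvement_pct(baseline, blended, lower_is_better=False):
        delta = (blended - baseline) / baseline * 100.0
        return -delta if lower_is_better else delta

    print(round(improvement_pct(8.56, 11.43)))                         # BLEU: 34
    print(round(improvement_pct(93.73, 85.09, lower_is_better=True)))  # TER: 9

Table 4.14 presents the corresponding improvements for the full dataset.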
Parameters   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU         32%              -86%             6%
METEOR       57%              -9%              5%
TER          10%              -3%              1%

Table 4.14: Overall percentage (%) improvement over different parameters with respect to NMT for full dataset

Similarly, Table 4.15 and Table 4.16 reflect the results (average percentage improvement) for the literature-based dataset and the full dataset respectively with respect to the only rule-based approach. Here, we find that the percentage improvements of our different blending approaches with respect to the rule-based approach are much higher than those with respect to NMT.

Parameters   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU         793%             -34%             588%
METEOR       50%              -21%             -8%
TER          9%               -3%              0%

Table 4.15: Overall percentage (%) improvement over different parameters with respect to rule-based approach for literature-based dataset

Parameters   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU         292%             -57%             215%
METEOR       55%              -11%             3%
TER          10%              -2%              1%

Table 4.16: Overall percentage (%) improvement over different parameters with respect to rule-based approach for full dataset

4.8 Extension of Our Experimental Results

Machine translation has been in practice for a long time in different forms, such as Example-based Machine Translation [4], Phrase-based Machine Translation [5], Statistical Machine Translation (SMT) [44], and Neural Machine Translation (NMT) [50]. NMT is the most recent technology in machine translation, and it outperforms all other translation approaches. This is why we attempt to contribute to machine translation keeping NMT as our prime focus, and adopt NMT in our system. However, NMT depends largely on the size and quality of the dataset, which we lack significantly for the Bengali language. Therefore, we extend our experimentation to another popular machine translation technology, Statistical Machine Translation (SMT). SMT was used by the popular Google Translator just before NMT, not more than five years earlier.

Besides, M. Mumin et al. recently reported a phrase-based statistical machine translation system between English and Bengali in both directions, claiming to have achieved a promising BLEU score of 17.43 for Bengali to English translation. In this regard, we adopt their baseline SMT system [37] to investigate the performance of SMT using our dataset. To do so, we first set up a popular SMT toolkit, Moses [59], and configure the system following their configuration process [37]. Next, we train the SMT system with our combined (literature-based and custom) dataset. Finally, we evaluate the performance of SMT using our dataset.

We achieve a BLEU score of 12.31 using the baseline SMT. In addition, we investigate the performance of our different blending approaches, blending our rule-based translator with SMT this time. We present the performance scores of the different approaches in Table 4.17.

Score    SMT    Rule-based   SMT+rule-based   Rule-based+SMT   SMT or rule-based
BLEU     12.31  3.13         16.43            2.16             14.14
METEOR   15.35  14.43        22.33            13.48            20.92
TER      88.14  90.17        82               93.35            85.38

Table 4.17: Comparison among different translation approaches considering SMT as baseline system

Table 4.17 reflects that the ‘machine translation followed by rule-based’ blending (here, ‘SMT followed by rule-based’) still remains the best translation approach. Interestingly, the performance of the ‘SMT followed by rule-based’ approach (BLEU = 16.43) is better than that of the ‘NMT followed by rule-based’ approach (BLEU = 12.26), since in this case SMT (BLEU = 12.31) performs better than NMT (BLEU = 9.28) in isolation.
This happens because our dataset is not large enough to train an NMT system efficiently. SMT perhaps takes this advantage to outperform NMT by a small margin for this dataset.
Besides, our best approach (BLEU = 16.43) lags behind their proposed approach (BLEU = 17.43) in terms of overall performance score, again because of our insufficient dataset. They trained their system with a large dataset containing 197,338 parallel Bengali-English sentences, which is more than 16 times larger than our current dataset. However, their dataset has not been made publicly available.

Nonetheless, SMT scores 16.91 using their dataset [37], whereas SMT scores 12.31 using our dataset. They achieve a BLEU score of 17.43 in their approach over an SMT score of 16.91 [37], which offers an improvement of 3.1% over SMT. However, our best translation approach achieves a BLEU score of 16.43 over an SMT score of 12.31, which offers a 33.5% improvement over SMT. Therefore, we expect to achieve a much higher BLEU score when we can match their dataset size in future. In addition, this extended experimentation leads to an important finding - “Any translation generated by machine (NMT or SMT) can be significantly improved after blending with rule-based translator”.

4.9 Extending Our Study to A High-Resource Context

The performance of data-driven translators (NMT or SMT) largely depends on the availability of a significant amount of training data. However, the largest dataset used in our experimentation presented so far consists of up to 11,500 parallel Bengali-English sentences. Only 11,500 sentences may not really satisfy the need for a significant amount of training data for an NMT system, mimicking the context of a low-resource language.

However, we are yet to show what would happen if we took our approach to a high-resource context. Therefore, we extend our study to a high-resource context by developing a larger Bengali-English parallel corpus containing more than one million sentence pairs. We summarize the performance scores of our different approaches obtained using this dataset in Table 4.18. Table 4.18 again establishes that our ‘NMT followed by rule-based’ approach performs the best among all alternative approaches.

Score    NMT    Rule-based   NMT+rule-based   Rule-based+NMT   NMT or rule-based
BLEU     13.43  4.16         18.73            2.89             14.51
METEOR   24.82  16.22        31.30            13.65            26.87
TER      85.79  88.50        77.94            92.84            83.14

Table 4.18: Comparison among different translation approaches for a high-resource context

Next, we present the improvement in performance scores of all the approaches with respect to an increase in the size of the dataset in Figure 4.42. The figure shows that performance improves with an increase in the size of the dataset. Here, we also show a comparison between NMT and the ‘NMT followed by rule-based’ approach in terms of BLEU scores using our different datasets. Note that we find the best performance score (BLEU = 18.73) after extending our experimentation to the high-resource context (with one million sentence pairs), which is substantially higher than our previous best score (BLEU = 12.26) obtained for the low-resource context (with 11,500 sentence pairs).

Figure 4.42: Comparison between NMT and ‘NMT followed by rule-based’ approach in terms of BLEU scores with different datasets

Finally, we present the improvement in performance scores of all the approaches with respect to an increase in the number of training steps in Figure 4.43. The figure shows that performance improves as we increase the number of steps for training the data-driven translators.

Figure 4.43: Comparison between NMT and ‘NMT followed by rule-based’ approach in terms of BLEU scores with respect to an increase in the number of training steps
Chapter 5

Analogy to Human Behaviour: A Casual Cross-Checking of Our Proposed Methods and Their Results

At this point, we perform a casual cross-checking of our proposed methods and their results with respect to human behaviour. Note that the idea of our proposed translation approaches actually comes from how people approach translation in real life. In this regard, we conduct a survey to identify how people generally perform translation from one language to another, such as from Bengali to English. Around 150 participants respond to this survey by sharing their own translation approach. The survey basically presents a very simple question to the participants on how they perform translation from one language to another (Bengali to English in our case). As the possible answers to the question, we provide all possible options. Thus, we form the question as follows.

Question: How do you prefer to translate from one language to another language (for example, Bengali to English)?

- Use experience only (how others speak and reading bilingual books) with no formal grammatical knowledge
- Strictly stick to applying knowledge of grammatical rules only
- Apply both grammatical rules and experience from various sources in any order
- Apply formal grammatical rules first to translate initially, and then try to use experience (on whether you have seen or heard something like your initially translated sentence) to modify it (maybe slightly) to get a more accurate translation
- Apply your experience first, then apply grammatical checking to make the translated sentence more accurate
- Translate separately using only grammatical rules and only experience. Then, choose one of the two without mixing one with the other at all

5.1 Demography of Survey Participants

Next, we present the demography of the participants. People of different ages, genders, and backgrounds take part in our survey. Besides, both Bengali-English speakers and non-speakers respond to our survey. We present the demography of the participants in Figure 5.1.

Figure 5.1: Demography of survey participants ((a) age in years, (b) gender, (c) background, (d) speaking language)

The figure shows that our participants cover different ages (Figure 5.1(a)), genders (Figure 5.1(b)), backgrounds (Figure 5.1(c)), and speaking languages (Figure 5.1(d)). We note that the majority of the participants are male students, aged between 18 and 24 years. In addition, people having no formal educational background (4%) also take part in our survey.

5.2 Survey Results

The majority of the survey participants respond that they use experience first before applying grammatical rules to translate from one language to another. This process mimics our ‘NMT (or SMT) followed by rule-based’ translation approach. This makes sense, as experience generally refers to learning how others communicate or speak in the target language, along with reading materials in that language. It is similar to the learning process of our mother tongue. Here, we interpret ‘using experience’ as ‘translation generated by machine (NMT or SMT)’, since a machine translates based on the learning acquired through rigorous training with corpora (datasets). Through using experience, a form of the target sentence generally gets generated, which is also what NMT (or SMT) does. Therefore, the case of using