A Crowd-Source Based Corpus on Bangla to English Translation
21st International Conference of Computer and Information Technology (ICCIT), 21-23 December, 2018

Nafisa Nowshin, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh, nafisanowshin107@gmail.com
Zakia Sultana Ritu, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh, zakiaritu.cse@gmail.com
Sabir Ismail, Computer Science and Engineering, Stony Brook University, New York, United States, sabir.ismail@stonybrook.edu

Abstract—In this paper, we present a crowd-source based Bangla to English parallel corpus and evaluate its accuracy. A complete and informative corpus is necessary for developing any language through automated processing. A Bangla to English parallel corpus is important for various multilingual applications and NLP research, but a complete Bangla to English parallel corpus is still scarce. In this paper we propose a large-scale method of constructing a Bangla to English parallel corpus through crowd-sourcing. We chose the crowd-sourcing method to explore a new approach to corpus construction and to evaluate human behavior patterns in the process. The translations were collected from undergraduate university students to ensure strong language knowledge. A Bangla to English parallel corpus will also help in comparing the linguistic features of these languages. In this paper we present an initial dataset prepared via crowd-sourcing which will serve as a baseline for further analysis of crowd-sourced corpora. Our primary dataset consists of 517 Bangla sentences; for every Bangla sentence we collected 4 English sentences on average, and 2143 English sentences in total via crowd-sourcing. This data was collected over a period of 2 months from 62 users. Finally, we analyze the dataset and give some conclusive ideas about further research.

Keywords—Natural Language Processing (NLP), machine learning, corpus, crowd-sourced data, Bangla to English translation.

I. INTRODUCTION

A corpus is basically a collection of written texts or spoken material of a language that is processed to learn about that language's behavior. Construction of a corpus is fundamental to most language research. In this sector Bangla is still a little behind other languages. There are a number of Bangla corpora available right now, such as SUMono [1] and BdNC01 [2], but none of them are constructed from crowd-sourced data. So we wanted to apply the crowd-source method to constructing a Bangla to English parallel corpus. Crowd-sourced data means data or information obtained for a particular task or project by enlisting the services of a large number of people, typically via the Internet. What we propose to do is to collect Bangla to English translation text data by means of crowd-sourcing and then evaluate the collected data to understand its variations and structure.

A parallel corpus is a corpus that contains a collection of original texts in one language and their translations in a set of other languages. In our case, the Bangla to English parallel corpus contains Bangla text data and its English translation. Parallel corpora have many uses in various fields, such as comparing the linguistic features of two languages, investigating similarities and differences between the source and the target language, and supporting translation studies and machine translation research.
A Bangla to English parallel corpus will help us in all of these sectors, and since it is prepared via crowd-sourcing we also obtain information about the behavioral patterns of users while translating the sentences. This can help further machine translation research on the Bangla language and help characterize translator behavior in that setting. Although crowd-sourced data is relatively new in NLP research, it is quickly gaining popularity for training machine learning models. It provides new insights and helps incorporate an understanding of human behavior into machine learning. For all these reasons the importance of crowd-sourced data is increasing rapidly. In this paper we propose a new method of parallel corpus construction applying this approach.

This paper is arranged as follows. In Section 2 we shed light on previous work on corpus construction and give an overview of the present state of this sector. In Section 3 we discuss our reasons for choosing this method. In Section 4 we discuss the full methodology of our work in detail, with examples of our data. We show an analysis of the collected data in Section 5 and conclude in Section 6.

II. BACKGROUND STUDY

Corpus construction is one of the most important parts of any type of language research. The strength of the digital presence of a language depends on the availability of a proper and complete corpus for that language, so much attention has been given to corpus construction in NLP. Bangla is no different: much work has been done, and many processes have been evaluated, in constructing a complete Bangla corpus. We discuss some of these works below.

The process of constructing a Bangla corpus started long ago. Dash and Chaudhuri [3] constructed a small-scale Bangla corpus, along with corpora for 9 other Indian languages, called the CIIL corpus. It consists of only 3 million words. Because of its small size, this corpus fails to be representative of the Bangla language.

Automatic Bangla corpus creation was attempted by Sarkar, Pavel and Khan [4]. They collected all freely available Bangla documents from the web with the help of a web crawler, along with available offline Bangla text documents. They then extracted all the words in these documents to build a huge text repository and converted it to Unicode text.

Salam, Yamada and Nishino [5] proposed the first balanced corpus for the Bangla language. They built the corpus around three independent criteria: time, domain and medium. As their goal was to construct a balanced corpus, they also added necessary metadata to the collected text, such as sample size, author details and topic. Their data sources were literary texts, Bangla academic papers, Bangla textbooks, newspaper articles, TV and radio news scripts, Bangla technical manuals and legal documents written in Bangla. To make the corpus representative and balanced they covered a wide range of text sources.

Mumin, Shoeb, Selim and Iqbal [1] constructed a new Bangla corpus named SUMono. This corpus consists of more than 27 million words. It was constructed from available online and offline Bangla text data, including articles on six types of topics, following the framework of the American National Corpus (ANC). SUMono includes written texts from writers of various backgrounds, Bangla newspaper articles available online, Bangla text data from various websites, and so on.
Because of the variety of the types of data available in this corpus, its representativeness of the Bangla language has been ensured.

The same group also built an English-Bengali parallel corpus known as SUPara [6]. In building this corpus their main focus was balance, and it contains a variety of texts from different domains. They first converted the plain texts to Unicode and then marked them up according to a corpus encoding standard. This corpus is open for educational and research purposes.

There also exist specialized Bangla corpora. A good example is the "Prothom-Alo" corpus [7], which was built solely from news articles published in the popular Bangla newspaper "Prothom-Alo" during the year 2005. The texts were first collected from the newspaper's website, then extracted, categorized and converted to Unicode. But as this corpus consists of very narrowly scoped data, it cannot be used in many NLP research works.

Khan, Ferdousi and Sobhan [2] created a new Bangla text corpus named "BdNC01". Its text sources are articles collected from the web editions of several influential daily newspapers and literary works of old and modern writers. It contains nearly 12 million words. The text data was collected over a period of 6 years to avoid time dependencies. After collection and processing, the text was added to the repository and statistical computations were performed on it for a better understanding of Bangla linguistic behavior.

Shamshed and Karim [8] also proposed a method for Bangla text corpus creation, intended for an efficient information retrieval system. As the corpus is meant for information retrieval, all of its text is organized by document. Their text sources were Bangla books and Bangla web data. After collecting and formatting the text data, they calculated term frequencies, applied a random walk algorithm on the data, and then assembled the metadata.

Finally, we can say that there is a rich and growing literature on corpus construction techniques and there is much scope for improving this sector. Most of the research works discussed in this section follow more or less the same type of development process; they vary in their text sources, their size and the range of topics they cover to represent the Bangla language. The most important observation from this discussion is that none of these works involve crowd-sourced data. As a matter of fact, constructing a Bangla text corpus from crowd-sourced data has not been attempted before, so we are proposing a new process of corpus construction.

III. WHY A CROWD-SOURCED CORPUS

There have been various approaches to parallel corpus construction. They mostly focus on collecting text documents in one language from web pages or written text files, converting them to Unicode, marking them up according to a corpus encoding standard such as XML, and then aligning them. We tried a new approach: first we constructed a Bangla corpus containing simple and small Bangla sentences, and then we collected crowd-sourced data for the English translations of these sentences. This way we obtained more than one translated sentence for each Bangla sentence and could compare the outputs. This process also gives us insight into human behavior when translating from one language to another. The process is discussed in detail in the next section.
IV. METHODOLOGY

A. Data Preparation

In the first step, we focused on preparing the Bangla text data. The Bangla text data in our corpus consists of simple and small Bangla sentences, mostly with only one verb. We worked with almost the same sentence patterns, varying a sentence by changing the tense of the verb and the person of the verb. This way we obtained many variations of one sentence. The reason for doing this was to compare the results we get from crowd-sourcing and to observe the users' behavioral patterns under small changes in the sentences. Below are some examples of the sentences present in our corpus:
• আমি ভাত খাই।
• আমি ভাত খাচ্ছি।
• আমি ভাত খাচ্ছিলাম।
• আমি ভাত খাবো।
• বাবা বাজারে গেছেন।
• বাবা বাজারে যাবেন।
• বাবা বাজারে যাচ্ছেন।
• বাবা কি বাজারে গেছেন?
• কৃষক ক্ষেতে কাজ করতে যাচ্ছে।
• সে কি ঢাকা শহরে বাস করে?
• বৃষ্টি না হলে আমরা বাইরে যাব।
• খেলাধুলা স্বাস্থ্যের জন্য উপকারী।

For preparing this text corpus we went through some Bangla to English translation books. We prepared the corpus with the help of school-level English grammar books [9], which cover Bangla to English translation and grammar structures. As can be seen from the example sentences above, we tried to cover assertive, interrogative, negative, conditional and imperative sentences. We also tried to focus on variations of gender, tense and person within the same sentence. The details of the Bangla part of the corpus are given in Table I.

Table I: DETAILS OF THE BANGLA PART OF THE CORPUS
Total sentences: 517
Total words: 2352
Average sentence length: 5 words

Fig. 1. Statistics of the Bangla sentences.

B. Data Collection

The English translations of our Bangla sentences were collected through crowd-sourcing. For this purpose, we developed a web interface for collecting translations from people; Figure 2 and Figure 3 show screenshots of the interface (Fig. 2: the sentence list; Fig. 3: adding the English translation of a sentence). Using this website we collected data from people: we gave each user a random Bangla sentence from the corpus and they had to add the English translation of that sentence. We collected 3 to 5 English translations for each Bangla sentence, which results in 4 translations per Bangla sentence on average. For the 517 Bangla sentences in our corpus we got a total of 2143 English translated sentences. The details of the English part of the corpus are given in Table II.

Table II: DETAILS OF THE ENGLISH PART OF THE CORPUS
Total sentences: 2143
Total words: 13062
Average sentence length: 6 words

Table III shows some of the translated sentences collected through crowd-sourcing. The data were collected from a group of university students whose medium of study is English and whose first language is Bangla. Our dataset consists mostly of simple sentences, and the user group chosen for collecting the data is well adapted to and capable of translating them. There were 62 contributors in total, and the data was collected over a period of 2 months. The source code of the website used for collecting translations is available on GitHub [10], along with the collected translations.
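The published collection website is the one in the authors' GitHub repository [10]; the sketch below is only a minimal Python/Flask illustration of the two operations such an interface needs, serving a random Bangla sentence and recording a submitted English translation. The route names, file names and CSV storage are assumptions, not the authors' design.

```python
# Minimal sketch of a crowd-sourcing endpoint (hypothetical routes and storage;
# the authors' actual implementation is in their GitHub repository [10]).
import csv
import random
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed input file: one Bangla sentence per line, UTF-8 encoded.
with open("bangla_sentences.txt", encoding="utf-8") as f:
    SENTENCES = [line.strip() for line in f if line.strip()]

@app.route("/sentence")
def get_sentence():
    """Serve a random Bangla sentence for the contributor to translate."""
    idx = random.randrange(len(SENTENCES))
    return jsonify({"id": idx, "bangla": SENTENCES[idx]})

@app.route("/translate", methods=["POST"])
def add_translation():
    """Append the submitted (sentence id, English translation) pair to a CSV file."""
    data = request.get_json()
    with open("translations.csv", "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([data["id"], SENTENCES[data["id"]], data["english"]])
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(debug=True)
```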
V. RESULT ANALYSIS

As stated earlier, for each Bangla sentence we got 4 English translations on average, but the number of translations received for any one sentence varied with sentence length. Sentences of length 3, 4 and 5 words were translated most often by users. The average number of translations received for each sentence length is shown in the graph in Figure 5.

Table III: COLLECTED DATA (each Bangla sentence is followed by the English translations received for it)

বাবা বাজারে যাবেন।
• Dad will go to bazar.
• Father will go to the market.
• father will go to the market
• Dad will go to market.
• Father will go to office.

বাবা কি আজ বাজারে যাবেন?
• Will father go to market today?
• Will father go to bazar today?
• Will father go to bazar today?

আমি এখন ভাত খাবো না।
• I won't eat rice now
• I will not eat rice now.
• I won't eat rice now.
• i wont eat rice now

আমি গতকাল ব্যস্ত ছিলাম।
• I was busy yesterday.
• I was busy yesterday.
• I was busy yesterday.
• I was busy yesterday.
• I was busy yesterday.

বাচ্চারা মাঠে খেলছে।
• children are playing in the field.
• Kids are playing in the field.

রহিম বেড়াতে যাচ্ছে।
• Rahim is going to visit.
• Rahim is going outside.
• Rahim is going to a tour.

তুমি কি কাজটি শেষ করেছ?
• Are you finished the job ?
• Have you done the work?
• did you finish the work?

কৃষক কি ক্ষেতে কাজ করছে?
• is farmer working on his farm.
• Farmer is working in the field?
• Is Farmer working in the field?
• Is farmer working in the field.

খেলাধুলা স্বাস্থ্যের জন্য উপকারী।
• sport is beneficial for health.
• The sport is beneficial for health.
• Sports are beneficial for health.
• Sports is better for health.

Table IV: TIME AND CONTRIBUTORS
Total contributors: 62
Time required: 2 months

Fig. 4. An overview of user contribution.
Fig. 5. Average number of translations per Bangla sentence, by sentence length (in words).
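The quantity plotted in Figure 5 (the average number of translations received per source-sentence length) can be computed with a few lines of Python. The sketch below assumes the collected data is available as (Bangla sentence, English translation) pairs, for example loaded from the CSV written by the collection sketch above; it is illustrative and not the authors' analysis code.

```python
# Sketch of the per-length analysis behind Figure 5: group collected translations
# by the word length of the source sentence and average the counts per length.
from collections import defaultdict

def translations_per_length(pairs):
    per_sentence = defaultdict(int)            # Bangla sentence -> number of translations
    for bangla, _english in pairs:
        per_sentence[bangla] += 1

    by_length = defaultdict(list)              # sentence length in words -> list of counts
    for bangla, count in per_sentence.items():
        by_length[len(bangla.split())].append(count)

    return {length: sum(counts) / len(counts) for length, counts in sorted(by_length.items())}

# Tiny hypothetical example:
pairs = [("আমি ভাত খাই।", "I eat rice."), ("আমি ভাত খাই।", "I eat rice")]
print(translations_per_length(pairs))   # {3: 2.0}
```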
As seen in the previous section, we got a number of translated sentences for each Bangla sentence. The translated sentences show some variation from user to user; we discuss these variations and the reasons behind them in this section.

In the case of very simple and small sentences, all the translations we got are almost identical and correct. For example:
1) আমি ভাত খাই না।
• I don't eat rice.
• I don't eat rice.
• I do not eat rice
• I don't eat rice.
As seen in the example above, the sentence is very small and simple and there is not much variation in the way different people translated it. But when the sentence has nouns and pronouns the translations vary more. For example:
2) বাবা বাজারে গেছেন।
• Dad went to the market.
• Father went to bazar.
• Father has gone to the market.
• Father has gone to the market.
• Father has gone to Market.
Here, for the noun 'বাবা' there are two English words, 'Father' and 'Dad', which can be used interchangeably and both are correct. The same can be said for the word 'বাজারে': while most people translated it to the English word 'market', one user treated it as a proper noun and translated it to 'bazar'. Similarly, synonyms can be used interchangeably by different users while translating. For example:
3) বাচ্চারা মাঠে ক্রিকেট খেলছে।
• The kids are playing cricket on the field.
• Children are playing cricket in the field.
• Children are playing cricket in the playground.
• Kids are playing cricket in the playground.
• Children are playing cricket in the field.
Here, for the word 'বাচ্চারা', the two synonymous English words 'kids' and 'children' have been used interchangeably, and the same happened for 'মাঠে', which can be translated as both 'playground' and 'field'. The real problem arises with universal truths: different people translate these types of sentences quite differently. For example:
4) দুর্ভাগ্যবান তারাই যাদের প্রকৃত বন্ধু নেই।
• Unlucky are those who don't have real friend.
• Those who do not have true friends are unfortunate.
• Unlucky are those who don't have real friends.
• Those are unfortunate who do not have true friends.
This much variation occurred in this example because universal-truth sentences do not have a single fixed sentence structure; as a result they are perceived differently by different people and the translations vary.

From the discussion above we can say that the alternative use of nouns, pronouns and synonyms creates most of the variation in the translation process. Sentences containing universal truths also need to be handled differently. Further work is needed to resolve these issues.

VI. CONCLUSIONS

Crowd-sourced data can serve as a promising method of corpus construction in the future. It has the advantage of reflecting human behavior while translating from one language to another. This method needs further analysis and more data to construct a complete corpus. Here we worked with an initial dataset to understand the method's performance and the issues surrounding this kind of corpus construction. The issues found in the analysis of this data need to be resolved in further work.

References
[1] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, "SUMono: A representative modern Bengali corpus," SUST Journal of Science and Technology, vol. 21, pp. 78-86, 2014.
[2] S. Khan, A. Ferdousi, and M. A. Sobhan, "Creation and analysis of a new Bangla text corpus BdNC01," International Journal for Research in Applied Science & Engineering Technology (IJRASET), vol. 5, 2017.
[3] N. S. Dash and B. B. Chaudhuri, "Corpus-based empirical analysis of form, function and frequency of characters used in Bangla," in Rayson, P., Wilson, A., McEnery, T., Hardie, A., and Khoja, S. (eds.), Special issue of the Proceedings of the Corpus Linguistics 2001 Conference, Lancaster: Lancaster University Press, UK, vol. 13, 2001, pp. 144-157.
[4] A. I. Sarkar, D. S. H. Pavel, and M. Khan, "Automatic Bangla corpus creation," BRAC University, Tech. Rep., 2007.
[5] K. M. A. Salam, S. Yamada, and T. Nishino, "Developing the first balanced corpus for Bangla language," in Informatics, Electronics & Vision (ICIEV), 2012 International Conference on. IEEE, 2012, pp. 1081-1084.
[6] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, "SUPara: A balanced English-Bengali parallel corpus," 2012.
[7] K. M. Majumder and Y. Arafat, "Analysis of and observations from a Bangla news corpus," 2006.
[8] J. Shamshed and S. M. Karim, "A novel Bangla text corpus building method for efficient information retrieval," Journal of Convergence Information Technology, vol. 1, no. 1, pp. 36-40, 2010.
[9] Chowdhury and Hossain, Advanced Learner's Communicative English Grammar & Composition for Class-6 First & 2nd Paper, 21st ed. Advanced Publication, 2016.
[10] "Crowd sourced translator and corpus construction project," [Online]. Available: https://github.com/ZakiaRitu/Crowdsource_translator/, last accessed 5 November 2018.
Sequence-to-sequence Bangla Sentence Generation with LSTM Recurrent Neural Networks

Procedia Computer Science 152 (2019) 51-58. Available online at www.sciencedirect.com. https://doi.org/10.1016/j.procs.2019.05.026
© 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of the International Conference on Pervasive Computing Advances and Applications (PerCAA 2019).

Md. Sanzidul Islam (Student, Dept. of CSE, Daffodil International University, Dhaka-1207, Bangladesh), Sadia Sultana Sharmin Mousumi (Student, Dept. of CSE, Daffodil International University), Sheikh Abujar (Lecturer, Dept. of CSE, Daffodil International University), Syed Akhter Hossain (Dept. Head, Dept. of CSE, Daffodil International University)
Corresponding author: Md. Sanzidul Islam. Tel.: +880 1736752047. E-mail address: sanzidul15-5223@diu.edu.bd

Abstract
Sequence-to-sequence text generation is the most efficient approach for automatically converting a script from a source sequence to a target sequence. Text generation is an application of natural language generation and is useful in sequence modeling tasks such as machine translation, speech recognition, image captioning, language identification, video captioning and much more. In this paper we discuss Bangla text generation using a deep learning approach, Long Short-Term Memory (LSTM), a special kind of RNN (Recurrent Neural Network). LSTM networks are suitable for analyzing sequences of text data and predicting the next word; an LSTM can be a good solution when the goal is to predict the very next point of a given time sequence. In this article we propose an artificial Bangla text generator built with LSTM, which is an early attempt for this language, and the model is validated with a satisfactory accuracy rate.

Keywords: Language Modeling; Text Generation; NLP; Bangla Text; Sequence-to-sequence; RNN; LSTM; Deep Learning; Machine Learning

1. Introduction

Recurrent neural networks are a type of neural network designed for capturing information from sequences or time-series data. An RNN is an extension of the feed-forward neural network and differs from other general neural network architectures in that it can handle variable-length input. Hochreiter and Schmidhuber proposed the Long Short-Term Memory (LSTM) technique back in 1997 [19]. It mitigates the vanishing gradient problem by adding extra gating structure to the recurrent cell, and it is more efficient and performs better than a plain RNN; it was like a revolution over recurrent neural networks. It works well on sequence-based tasks and on any type of sequential data.
RNNs cannot handle backpropagation through long sequences very well, but LSTMs can. RNNs are limited by their memory, whereas LSTMs do not have the same memory problem over long-range dependencies. RNNs suffer from the same vanishing (or, less notoriously, exploding) gradient problem as fully connected networks, while LSTMs keep the gradient well behaved. LSTM is better than a plain RNN because LSTMs are explicitly designed to avoid the long-term dependency problem: remembering information for long periods of time is practically their default behavior, not something they struggle to learn.

1.1. Dataset Properties

The neural network we built was trained on a Bangla newspaper corpus. We collected a newspaper corpus covering 917 days of newspaper text from Prothom Alo online; web scraping with Python helped a lot in doing this work automatically (a minimal scraping sketch, under stated assumptions, follows the list below). The training dataset has the following properties:
• Text from 917 days of the newspaper in total.
• The daily newspaper text contains about 4500 sentences on average.
• Those 4500 sentences contain about 12,500 words.
• Those 12,500 words contain about 155,000 characters on average.
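The paper states that the 917-day Prothom Alo corpus was gathered automatically with Python web scraping, but gives no implementation details. The sketch below shows one common way to do this with requests and BeautifulSoup; the placeholder archive URL and the decision to keep only <p> tags are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of collecting newspaper text with requests + BeautifulSoup.
# The archive URL pattern and the choice of <p> tags are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def scrape_day(url: str) -> str:
    """Download one page and return its visible paragraph text."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
    return "\n".join(paragraphs)

if __name__ == "__main__":
    # Hypothetical list of archive URLs, one per day of the 917-day collection window.
    day_urls = ["https://example.com/archive/2016-01-01"]
    with open("newspaper_corpus.txt", "w", encoding="utf-8") as out:
        for url in day_urls:
            out.write(scrape_day(url) + "\n")
```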
2. Literature Review

We propose a model that can generate sequence-to-sequence Bangla text. There are many research and development works in this field, but text generation work with LSTM for the Bangla language is hard to find. That is why we decided to build our own dataset and our own prediction model.

Naveen Sankaran et al. proposed a formulation in which recognition is treated as training a sequential translation model [1]; they convert the words of a document directly into a Unicode sequence. Praveen Krishnan et al. introduced an OCR system that follows a combined, segmentation-free architecture across seven Indian languages [2]. Their system was designed to support continuous learning while in use, for example from continuous user input, and they worked with BLSTM, a variant of the general LSTM. A character-based encoder-decoder model that learns sequence-to-sequence transliteration was built by Amir H. Jadidinejad [3]; the proposed encoder is a bidirectional recurrent neural network that encodes a sequence of symbols into a fixed-length vector representation. The results of the SIGMORPHON 2016 shared task indicated that the attentional sequence-to-sequence model of Bahdanau et al. is well suited to this task [4] [5]. Robert Ostling and Johannes Bjerva proposed a model built on an attentional sequence-to-sequence neural network with an LSTM architecture, which attracted considerable interest [6]. Yasuhisa Fujii et al. considered line-level script identification in the context of multilingual OCR; they studied several variants of an encoder-summarizer method within an up-to-date multilingual OCR framework and used an evaluation set of multi-domain line images from 232 languages in 30 scripts [7]. A DNN-based SPSS system was built by Sivanand Achanta et al., which represents the acoustic parameter trajectories of an utterance with a single vector obtained from sequence-to-sequence auto-encoders [8]. Mikolov et al. established the importance of distributed representations and the ability to model arbitrarily long dependencies using RNN-based language models [9] [10]. Sutskever et al. generate meaningful sentences with a modified RNN trained on a character-level corpus [11]; they introduced a new RNN model that uses multiplicative connections. Karpathy et al. showed that an RNNLM is effective at generating image descriptions on top of a pre-trained model [12]; they constructed a multimodal RNN architecture. Zhang and Lapata also describe remarkable work using RNNs to create Chinese poetry [13]; it was a good initiative at the time and was able to generate some lines of a Chinese poem automatically. Mairesse and Young suggested a phrase-based NLG method based on factored language models that can learn from a semantically aligned corpus [14]; they focused on crowd-sourced data and showed how to work with it. Although active learning was also recommended to enable learning online directly from users, the need for human-annotated alignments limits the scalability of the scheme of Mairesse et al. [15]. Another related approach, by Angeli et al., casts NLG as a pattern extraction and matching problem [16]. Kondadadi et al. show that the outputs can be further improved by an SVM ranker, making them comparable to human-authored texts [17]; they proposed an end-to-end generation technique with some local decisions. Subhashini Venugopalan, Marcus Rohrbach and Raymond Mooney suggest a novel sequence-to-sequence model to generate captions for videos, where frames are first read sequentially and then words are generated serially [18].

3. Method Discussion

3.1. RNN Structure

The LSTM network is a special type of RNN. An RNN is a neural network that attempts to model sequential or time-dependent behavior. This is done by feeding the output of a neural network layer at time t back to the input layer at time t + 1. (1) It looks like this [20]:

Fig. 1. Sequential nodes of a Recurrent Neural Network.

Recurrent Neural Networks can be described as being unrolled programmatically at training and test time, so we can picture something like [20]:

Fig. 2. Unrolled Recurrent Neural Network.

The figure shows that at every step a new word is supplied together with the previous output (i.e. h_{t-1}), which is also passed on to the next step. In principle, RNNs are able to handle long-term dependencies, but in practice they often fail to learn them. The issue was studied in detail by Hochreiter (1991) and Bengio et al. (1994), who showed some fundamental reasons why it is difficult. That is why we use LSTM, a better form of RNN.

3.2. LSTM Networks

LSTMs (Long Short-Term Memory networks) are a special type of RNN capable of learning long-term dependencies. They were introduced by Hochreiter and Schmidhuber in 1997, and have since been refined and popularized by many people. LSTMs are explicitly designed to avoid the long-term dependency problem: keeping information over long periods of time is their default behavior, not something they struggle to learn. The graphical representation of an LSTM cell can be shown as below [20]:

Fig. 3. LSTM Cell Diagram.
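To complement Fig. 3, here is a minimal NumPy sketch of a single LSTM time step that follows the standard gate equations the paper writes out in Section 4.2; the weight shapes and random initialization are placeholders for illustration, not the trained model.

```python
# Minimal NumPy sketch of one LSTM time step, following the standard gate
# equations of Section 4.2 (candidate input, input, forget and output gates).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, s_prev, params):
    """One LSTM step: returns the new hidden state h_t and cell state s_t."""
    U, V, b = params["U"], params["V"], params["b"]
    g = np.tanh(b["g"] + x_t @ U["g"] + h_prev @ V["g"])   # candidate input
    i = sigmoid(b["i"] + x_t @ U["i"] + h_prev @ V["i"])    # input gate
    f = sigmoid(b["f"] + x_t @ U["f"] + h_prev @ V["f"])    # forget gate
    o = sigmoid(b["o"] + x_t @ U["o"] + h_prev @ V["o"])    # output gate
    s_t = s_prev * f + g * i                                 # new cell state
    h_t = np.tanh(s_t) * o                                   # new hidden state
    return h_t, s_t

# Toy dimensions, for illustration only.
d_in, d_hid = 8, 16
rng = np.random.default_rng(0)
params = {
    "U": {k: rng.normal(size=(d_in, d_hid)) for k in "gifo"},
    "V": {k: rng.normal(size=(d_hid, d_hid)) for k in "gifo"},
    "b": {k: np.zeros(d_hid) for k in "gifo"},
}
h, s = np.zeros(d_hid), np.zeros(d_hid)
h, s = lstm_step(rng.normal(size=d_in), h, s, params)
print(h.shape)  # (16,)
```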
4. Proposed Methodology

4.1. Dataset Preprocessing

Working with Bangla is still quite difficult, as there are few resources and R&D works in this field. Processing Bengali text data is therefore a hard task: the raw data is noisy and not directly suitable for machine learning or deep learning approaches. We did some preprocessing to make our dataset noise-free so that it performs at its best in the neural network (a small sketch of these steps, under stated assumptions, follows this list):
• Removed all Bengali punctuation marks.
• Removed extra spaces and newlines.
• Converted the text into UTF-8 format.
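The three preprocessing bullets above map almost directly onto code. A minimal sketch follows; the exact set of punctuation characters removed and the file names are assumptions, since the paper does not list them.

```python
# Sketch of the preprocessing steps listed above: strip punctuation, collapse
# whitespace/newlines, and write the result as UTF-8. The punctuation set below
# (the danda '।' plus common ASCII marks) is an assumption, not the paper's list.
import re

PUNCT = "।॥,;:!?\"'()[]{}-"

def preprocess(text: str) -> str:
    text = re.sub(f"[{re.escape(PUNCT)}]", " ", text)   # drop punctuation marks
    text = re.sub(r"\s+", " ", text)                     # collapse spaces and newlines
    return text.strip()

if __name__ == "__main__":
    with open("newspaper_corpus.txt", encoding="utf-8") as f:    # raw scraped text
        raw = f.read()
    with open("clean_corpus.txt", "w", encoding="utf-8") as f:   # UTF-8 output
        f.write(preprocess(raw))
```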
4.2. Proposed Method

In general an LSTM network is complex compared to other methods, and it consumes considerable hardware power and machine capability. The whole internal activity and logic flow can be presented as follows.

1) Input: First, the input is squashed with the tanh activation function to the range between -1 and 1. This can be expressed as

$g = \tanh(b^g + x_t U^g + h_{t-1} V^g)$   (2)

where $U^g$ and $V^g$ are the weights applied to the current input and the previous cell output, and $b^g$ acts as the input bias. Note that the superscript $g$ only marks the input-stage weights; it is not an exponent. The input gate is

$i = \sigma(b^i + x_t U^i + h_{t-1} V^i)$   (3)

Equation (4) is taken as the output of the LSTM input section:

$g \circ i$   (4)

where $\circ$ denotes element-wise multiplication.

2) Forget gate and state loop: the output of the forget gate is

$f = \sigma(b^f + x_t U^f + h_{t-1} V^f)$   (5)

The product of the previous state and the forget gate is

$s_{t-1} \circ f$   (6)

and the state for the next time frame is obtained by combining it with the input section:

$s_t = s_{t-1} \circ f + g \circ i$   (7)

3) Output gate: the output gate is computed as

$o = \sigma(b^o + x_t U^o + h_{t-1} V^o)$   (8)

so that the final cell output, with tanh squashing, can be expressed as

$h_t = \tanh(s_t) \circ o$   (9)

Finally, a very common compact form of the LSTM network equations can be found in Colah's well-known blog post [21].

Fig. 4. LSTM network equations.

That is how the Long Short-Term Memory (LSTM) network performs its operations sequentially, and why it performs well on any type of sequential data. The activity flow of the LSTM network is presented in the figure below, where the time-evolution terms specific to LSTM can be noticed.

Fig. 5. LSTM network activity flow.

4.3. Layer Description

Generally a neural network contains three kinds of layers: one for taking input, one for computation and one for producing the decision. An input embedding layer was used as the initial layer of the neural network. Here a single line of text is trained one after another, sequentially. Then comes the hidden layer, which is the main LSTM layer with 100 units. The final, output layer applies an activation function named softmax. Softmax turns scores over n events into a probability distribution; it computes the probability of each target class across all possible target classes:

$S(y_i) = \dfrac{e^{y_i}}{\sum_i e^{y_i}}$   (10)

4.4. Model Validation

The LSTM model is a little different from a validation perspective. Determining performance with cross-validation or train-test accuracy, as is usual for a CNN model [22], is not practical here; it is better to test the model on real data and inspect its output. Due to hardware limitations we trained on only one week of the newspaper corpus. We then tested with different Bengali words, and the model generated text conditioned on the previous text. Two Bangla sentences generated with our model are shown in Fig. 6 and Fig. 7.

Fig. 6. Testing the model (example 1).
Fig. 7. Testing the model (example 2).
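Section 4.3 specifies the layer stack (an embedding input layer, a 100-unit LSTM hidden layer and a softmax output layer) and Section 4.4 describes validating the model by letting it generate text word by word. The sketch below shows what such a model and a greedy generation loop can look like in Keras; the vocabulary size, embedding width, window length and optimizer are assumptions for illustration, since the paper does not report them.

```python
# Hedged Keras sketch of the layer stack from Section 4.3 and the next-word
# generation used for validation in Section 4.4. Hyperparameters are assumed.
import numpy as np
from tensorflow.keras.layers import Dense, Embedding, LSTM
from tensorflow.keras.models import Sequential

VOCAB_SIZE, EMB_DIM, SEQ_LEN = 10000, 64, 10   # assumed values, not from the paper

model = Sequential([
    Embedding(input_dim=VOCAB_SIZE, output_dim=EMB_DIM),   # embedding input layer
    LSTM(100),                                              # the 100-unit LSTM layer
    Dense(VOCAB_SIZE, activation="softmax"),                # probability over next word
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
# model.fit(X, y, epochs=...)  # X: word-id windows of length SEQ_LEN, y: next word ids

def generate(seed_ids, n_words):
    """Greedy next-word generation from a list of seed word ids."""
    ids = list(seed_ids)
    for _ in range(n_words):
        window = np.array([ids[-SEQ_LEN:]])      # most recent SEQ_LEN word ids
        probs = model.predict(window, verbose=0)[0]
        ids.append(int(np.argmax(probs)))        # pick the most probable next word
    return ids
```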
5. Future Work

In this paper we worked with limited data due to hardware limitations; we will enhance our dataset later. In the future we will improve the model to achieve multi-task sequence-to-sequence text generation and multi-way translation, such as Bengali article and caption generation. Furthermore, we aim to pursue the possibility of extending our model to Bangla regional languages. We also plan to work on Bangla Sign Language [23] generation from sequential image data, analogous to ordinary spoken language.

References
[1] Naveen Sankaran T, Aman Neelappa, C. V. Jawahar, "Devanagari Text Recognition: A Transcription Based Formulation," 12th International Conference on Document Analysis and Recognition, 25-28 Aug. 2013, Washington DC, USA.
[2] Praveen Krishnan, Naveen Sankaran T, Ajeet Kumar Singh, C. V. Jawahar, "Towards a Robust OCR System for Indic Scripts," International Workshop on Document Analysis Systems, Centre for Visual Information Technology, International Institute of Information Technology Hyderabad - 500 032, India, April 2014.
[3] Amir H. Jadidinejad, "Neural Machine Transliteration: Preliminary Results," arXiv:1609.04253v1 [cs.CL], 14 Sep 2016.
[4] Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden, "The SIGMORPHON 2016 shared task: Morphological reinflection," in Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Association for Computational Linguistics, Berlin, Germany, pp. 10-22, 2016.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," CoRR abs/1409.0473, 2014.
[6] Robert Ostling and Johannes Bjerva, "SU-RUG at the CoNLL-SIGMORPHON 2017 shared task: Morphological Inflection with Attentional Sequence-to-Sequence Models," arXiv:1706.03499v1 [cs.CL], 12 Jun 2017.
[7] Yasuhisa Fujii, Karel Driesen, Jonathan Baccash, Ash Hurst and Ashok C. Popat, "Sequence-to-Label Script Identification for Multilingual OCR," Google Research, Mountain View, CA 94043, USA, arXiv:1708.04671v2 [cs.CV], 17 Aug 2017.
[8] Sivanand Achanta, KNRK Raju Alluri and Suryakanth V. Gangashetty, "Statistical Parametric Speech Synthesis Using Bottleneck Representation From Sequence Auto-encoder," Speech and Vision Laboratory, IIIT Hyderabad, India, arXiv:1606.05844v1 [cs.SD], 19 Jun 2016.
[9] Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur, "Recurrent neural network based language model," in Proceedings of InterSpeech, 2010.
[10] Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan H. Cernocky and Sanjeev Khudanpur, "Extensions of recurrent neural network language model," in ICASSP 2011, IEEE International Conference on Acoustics, Speech and Signal Processing, 2011.
[11] Ilya Sutskever, James Martens and Geoffrey E. Hinton, "Generating text with recurrent neural networks," in Proceedings of the 28th International Conference on Machine Learning (ICML-11), ACM, 2011.
[12] Andrej Karpathy and Li Fei-Fei, "Deep visual-semantic alignments for generating image descriptions," CoRR, 2014.
[13] Xingxing Zhang and Mirella Lapata, "Chinese poetry generation with recurrent neural networks," in Proceedings of the 2014 Conference on EMNLP, Association for Computational Linguistics, October 2014.
[14] Francois Mairesse and Steve Young, "Stochastic language generation in dialogue using factored language models," Computational Linguistics, 2014.
[15] Francois Mairesse, Milica Gasic, Filip Jurcicek, Simon Keizer, Blaise Thomson, Kai Yu and Steve Young, "Phrase-based statistical language generation using graphical models and active learning," in Proceedings of the 48th ACL, ACL '10, 2010.
[16] Gabor Angeli, Percy Liang, and Dan Klein, "A simple domain-independent probabilistic approach to generation," in Proceedings of the 2010 Conference on EMNLP, EMNLP '10, Association for Computational Linguistics, 2010.
[17] Ravi Kondadadi, Blake Howald, and Frank Schilder, "A statistical NLG framework for aggregated planning and realization," in Proceedings of the 51st Annual Meeting of the ACL, Association for Computational Linguistics, 2013.
[18] Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell and Kate Saenko, "Sequence to Sequence - Video to Text," arXiv:1505.00487v3 [cs.CV], 19 Oct 2015.
[19] Hochreiter, Sepp, and Jürgen Schmidhuber, "Long short-term memory," Neural Computation 9.8 (1997): 1735-1780.
[20] Adventuresinmachinelearning.com, "Keras LSTM tutorial - How to easily build a powerful deep learning language model," 2018. [Online]. Available: http://www.adventuresinmachinelearning.com/keras-lstm-tutorial/. [Accessed: 14 Aug 2018].
[21] Colah.github.io, "Understanding LSTM Networks," 2015. [Online]. Available: http://colah.github.io/posts/2015-08-Understanding-LSTMs/. [Accessed: 14 Aug 2018].
[22] Islam, Sanzidul, et al., "A Potent Model to Recognize Bangla Sign Language Digits Using Convolutional Neural Network," Procedia Computer Science 143 (2018): 611-618.
[23] Islam, Md Sanzidul, et al., "Ishara-Lipi: The First Complete Multipurpose Open Access Dataset of Isolated Characters for Bangla Sign Language," 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), IEEE, 2018.
Design and Implementation of an Efficient DeConverter for Generating Bangla Sentences from UNL Expression
Conference Paper, June 2015. DOI: 10.1109/ICIEV.2015.7334006

Aloke Kumar Saha, Computer Science & Engineering, University of Asia Pacific, Dhaka, Bangladesh, aloke71@yahoo.com
Md. Firoz Mridha, Computer Science & Engineering, University of Asia Pacific, Dhaka, Bangladesh, mdfirozm@yahoo.com
Molla Rashied Hussein, Computer Science & Engineering, University of Asia Pacific, Dhaka, Bangladesh, mrh.cse@uap-bd.edu
Jugal Krishna Das, Computer Science & Engineering, Jahangirnagar University, Savar, Dhaka, Bangladesh, cedas@juniv.edu

Abstract—In this paper, the design and implementation of a Bangla DeConverter for DeConverting Universal Networking Language (UNL) expressions into the Bangla Language is propounded. The UNL is an Artificial Language, which not only facilitates the translation stratagem between all the Natural Languages across the world, but also proffers the unification of those Natural Languages as well. DeConverter is the core software contrivance in a UNL system.
The paper also focuses on the linguistic analysis of the Bangla Language for the DeConversion process. A set of DeConversion rules has been developed for converting UNL expressions to Bangla. Experimental results show that these rules successfully generate correct Bangla text from UNL expressions. The rules can currently produce basic and simple Bangla sentences; the rule set is being extended to handle advanced and complex sentences.
Keywords- DeConverter, EnConverter, Machine Translation, UNL.
I. INTRODUCTION
In this era of Information Technology (IT), the World Wide Web (WWW) has become the nucleus of essential information. However, a large amount of that resource is still beyond the reach of a significant portion of society because of the man-made Language Barrier. There is a great need to translate digital content, including but not limited to websites, blogs, online news portals, e-books, e-journals and e-mails, into the Native Language in order to overcome that Language Barrier. In this multilingual milieu, Machine Translation (MT) is considered an important tool for removing the barrier, and UNL-based MT (developed with an interlingua-based approach) is one such effort. The UNL program was launched back in 1996 at the Institute of Advanced Studies (IAS) of the United Nations University (UNU), Tokyo, Japan [1]; it is currently supported by the Universal Networking Digital Language (UNDL) Foundation, an autonomous organization founded in 2001 as an extension of the UNL program, with its headquarters in Geneva, Switzerland [2]. The UNL approach
pertains to the development of an EnConverter and a DeConverter for each Natural Language. The EnConverter is used to convert a given sentence in a Natural Language into an equivalent UNL expression, and the DeConverter does the reverse, i.e. converts a given UNL expression into an equivalent Natural Language sentence. A UNL system has the potential to knock down Language Barriers across the world with the development of only 2n components, whereas the traditional pairwise approach requires n(n−1) components, where n is the number of Languages (for example, with n = 10 languages, 20 components instead of 90). In this paper, the design and development of a Bangla DeConverter is presented, with emphasis on the DeConversion rules and the Semantic ambiguity handling of the Bangla DeConverter. Syntactic alignment is the process of defining the arrangement of words in the target output; this phase plays a vital role in the accuracy of the generation process.
II. UNL SYSTEM AND ITS STRUCTURE
The UNL system consists of two core tools, namely the EnConverter and the DeConverter, which are used for Natural Language Processing (NLP), a major branch of Artificial Intelligence (AI). The process of converting a source Language, i.e. a Natural Language expression, into the desired UNL expression is referred to as EnConversion, and the process of converting a UNL expression into a target or destination Language, i.e. the desired Native Language expression, is referred to as DeConversion. The EnConverter and DeConverter for a Language form a Language Server that may reside on the Internet. Both the EnConverter and the DeConverter perform their functions on the basis of a set of Grammar rules and a Word Dictionary of the Native Language. A UNL representation consists of UNL relations, UNL Attributes (UAs) and Universal Words (UWs). UWs are represented by their English equivalents. These words are listed in the Universal Word Lexicon of the UNL knowledge base [6]. Relations are the building blocks of UNL sentences. The relations between the words are drawn from a set of predefined relations [3, 4, 7, 8, 9, 10]. The attribute labels are attached to UWs to provide additional information such as Tense, Number etc. For example, “করিম কলা খায়”, in English “Karim eats banana”, can be represented as the UNL expression: {unl} agt(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,karim(icl>name>abstract_thing,com>male,nam<person)) obj(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,banana(icl>herb>thing)) {/unl} Here, it should be noted that agt is the UNL relation which indicates “a thing which initiates an action”; obj is another UNL relation which indicates “a thing in focus which is directly affected by an event”; @entry and @present are UNL attributes which indicate the main Verb and the Tense information; and @sg is a UNL attribute which indicates Number information.
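To make the structure of such an expression concrete, the following short Python sketch (an illustration added here, not part of the UNL specification or of the authors' system; names such as UNLNode and parse_relation are assumptions) parses the two binary relations of the “Karim eats banana” example into relation, parent and child records together with their attribute lists.

# Illustrative sketch only: parse UNL binary relations such as
# agt(parent, child) into structured records. The class and function
# names are assumptions introduced for this example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UNLNode:
    headword: str                 # e.g. "eat"
    restrictions: str             # e.g. "icl>consume>do,agt>living_thing,..."
    attributes: List[str] = field(default_factory=list)   # e.g. ["@entry", "@present"]

def split_top_level(text: str, sep: str = ",") -> List[str]:
    # Split on `sep` only at parenthesis depth zero.
    parts, buf, depth = [], [], 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == sep and depth == 0:
            parts.append("".join(buf))
            buf = []
        else:
            buf.append(ch)
    parts.append("".join(buf))
    return parts

def parse_node(text: str) -> UNLNode:
    # 'eat(icl>consume>do,...).@entry.@present' -> UNLNode
    head, _, rest = text.strip().partition("(")
    restrictions, _, attr_part = rest.rpartition(")")
    attributes = [a for a in attr_part.split(".") if a.startswith("@")]
    return UNLNode(head, restrictions, attributes)

def parse_relation(line: str):
    # 'agt(parent,child)' -> (relation, parent UNLNode, child UNLNode)
    rel, _, body = line.strip().partition("(")
    body = body[:-1]              # drop the relation's own closing parenthesis
    parent_text, child_text = split_top_level(body)
    return rel, parse_node(parent_text), parse_node(child_text)

expression = [
    "agt(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing)"
    ".@entry.@present,karim(icl>name>abstract_thing,com>male,nam<person))",
    "obj(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing)"
    ".@entry.@present,banana(icl>herb>thing))",
]
for line in expression:
    rel, parent, child = parse_relation(line)
    print(rel, parent.headword, parent.attributes, "->", child.headword)
# Prints: agt eat ['@entry', '@present'] -> karim
#         obj eat ['@entry', '@present'] -> banana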
III. HOW DECONVERTER WORKS
The DeConverter is a Language Independent Generator (LIG), which provides a Framework for the Syntactic and Morphological generation of a Native Language. It can convert UNL Expressions into Natural Languages using the corresponding Word Dictionaries and sets of DeConversion Rules for the desired Native, i.e. Target, Languages. A Word Dictionary contains the Information of Words, which correspond to the UWs included in the input UNL Expressions, and the Grammatical Attributes (GAs), which describe the behaviors of the Words. DeConversion Rules describe how to construct a Sentence using the Information from the input UNL Expression and the Information defined in the Word Dictionary. The DeConverter converts UNL Expressions into Sentences of a Target Language following the descriptions of the Generation Rules. The UNL Ontology is also helpful when no corresponding Word for a particular UW exists in that Language. In this case, the DeConverter consults the UNL Ontology and tries to find a more general UW for which a corresponding Word exists in its Word Dictionary, and consequently uses the word of that upper UW to generate the Target Sentence. The DeConverter works in the following way. First, it transforms the input UNL expression, a set of binary relations, into a Directed Graph (DG) structure with Hyper-Nodes called a Node-Net. The Root Node of a Node-Net is called the Entry Node and represents the Head (e.g. the main Verb) of a Sentence. DeConversion of a UNL Expression is carried out by applying Generation Rules to the Nodes of the Node-Net. It starts from the Entry Node to find an appropriate Word for each Node and to generate a Word sequence (a list of words in grammatical order) of the Target Language. In this process, the Syntactic structure is determined by applying Syntactic Rules, and Morphemes are similarly generated by applying Morphological Rules. The DeConversion process ends when words for all Nodes are found and a Word sequence of the Target Sentence is completed. Fig. 1 shows the structure of the DeConverter. “G” indicates Generation Windows, and “C” indicates Condition Windows of the DeConverter. The DeConverter operates on the Node-List through the Generation Windows; Condition Windows are used to check conditions when applying a Rule. In the initial stage, in contrast to the EnConverter [5], the Entry Node of a UNL Expression exists in the Node-List. At the end of DeConversion, the Node-List is the list of all Morphemes, each as a Node, that are converted from the Node-Net and constitute the Target Sentence [6]. Figure 1. DeConverter structure.
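As a rough, illustrative companion to this description (not the data structures prescribed by the DeConverter specification; all names below are assumptions), the sketch builds a small Node-Net as a directed graph from (relation, parent, child) triples of the earlier “Karim eats banana” example, locates the Entry Node via its @entry attribute, and walks the graph depth-first, which is the point at which word selection and generation rules would be applied.

# Illustrative sketch only: represent a UNL expression as a Node-Net
# (directed graph rooted at the @entry node) and traverse it.
from collections import defaultdict

# (relation, parent, child) triples of the "Karim eats banana" example,
# with UNL attributes kept on the universal words.
relations = [
    ("agt", "eat.@entry.@present", "karim"),
    ("obj", "eat.@entry.@present", "banana"),
]

def build_node_net(triples):
    # parent -> list of (relation, child) edges
    children = defaultdict(list)
    nodes = set()
    for rel, parent, child in triples:
        children[parent].append((rel, child))
        nodes.update((parent, child))
    # the Entry Node is the one carrying the @entry attribute
    entry = next(node for node in nodes if "@entry" in node)
    return entry, children

def traverse(node, children, relation="entry"):
    # Depth-first walk from the Entry Node, yielding (incoming relation, node).
    yield relation, node
    for rel, child in children.get(node, []):
        yield from traverse(child, children, rel)

entry, children = build_node_net(relations)
for rel, node in traverse(entry, children):
    print(f"{rel:>6}  {node}")
# Prints the entry node first, then its agt and obj children; a generation
# pass would now pick a target-language word for each node and order the
# words with syntactic and morphological rules.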
IV. DESIGN AND IMPLEMENTATION OF BANGLA DECONVERTER
The DeConverter makes use of Language-Independent (LI) and Language-Dependent (LD) components during the generation process [11]. The first stage of the DeConverter is the UNL parser, which parses an input UNL expression and builds a Node-Net from it. During the Lexeme selection stage, Bangla Root Words and their Dictionary Attributes are selected from the Bangla-UW Dictionary for the UWs given in that UNL expression. After that, the Nodes are ready for Morphology generation according to the Target Language in the Morphology phase. In this stage, the Root Words may be changed; i.e., something can be added or removed to obtain the complete sense of the Words. The system makes use of Morphological Rules for this purpose. In the Function Word insertion phase, Function Words or Case Markers, such as টি, টা, খানা, খানি, র, এর, য়, তে, থেকে, চেয়ে etc., are attached to the Morphed Words. These Function Words are inserted in the generated Sentence based on a Rule-based design [12]. Finally, the Syntactic Linearization phase is used to define the Word order in the generated Sentence, so that the output matches a Natural Language Sentence [13]. The working of the Bangla DeConverter is illustrated with the example Sentence given below: Bangla Sentence: ছেলেরা মাঠে ফুটবল খেলে। Transliterated Sentence: Chelara mathe football khele. Equivalent English Sentence: Boys play football in the field. The UNL expression for the example Sentence is given below: {unl} agt(play(icl>compete>do,agt>thing).@entry.@present,boy(icl>child>person.@pl)) man(play(icl>compete>do,agt>thing).@entry.@present,football(icl>field_game>thing)) plc(football(icl>field_game>thing),field(icl>tract>thing).@def) {/unl} To convert the UNL expression to the Bangla Natural Language Sentence, the Bangla DeConverter is used. The UNL expression acts as input for the Bangla DeConverter [14]. The UNL parser checks the input UNL expression for errors and generates the Node-Net. The Lexeme selection phase populates the Node-List with the equivalent Bangla Words for the UWs given in the input UNL expression. The populated Node-List is given below: Node1: Bangla word: খেলে; UW: play(icl>compete>do,agt>thing).@entry.@present Node2: Bangla word: ছেলেরা; UW: boy(icl>child>person.@pl) Node3: Bangla word: ফুটবল; UW: football(icl>field_game>thing) Node4: Bangla word: মাঠে; UW: field(icl>tract>thing).@def In the Morphology phase, Morphological Rules are applied to modify the Bangla Words stored in the Nodes according to the UNL Attributes given in the input UNL expression and the Dictionary Attributes retrieved from the Bangla-UW Dictionary [14]. The Nodes are processed by the Morphological Rules. It is evident that, in the Morphology phase, খেল ‘play’ is changed to খেলা ‘played’ and ছেলে ‘boy’ is changed to ছেলেরা by the Morphological Rules. The Function Word insertion phase inserts Function Words into the Morphed Lexicon [15]. The Nodes processed by the Function Word insertion phase are as follows: in this phase, the Case Markers রা and ে ‘in’ are added to Node2 and Node4, respectively, according to the Rule-based Function Word insertion. In the Syntactic Linearization phase, the Nodes are traversed in a specific sequence based on the Syntactic Rule-based Linearization for the Bangla Language [16]. The sequence for processing the Nodes, and the Bangla Sentence generated by this sequence, is given below: Node2 Node4 Node3 Node1 ছেলেরা মাঠে ফুটবল খেলে। It is evident from the generated Bangla Sentence that the system is able to convert a UNL expression input into a Bangla Natural Language Sentence successfully.
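To tie these four phases together, the following purely illustrative Python sketch reproduces the example end to end; the miniature lexicon, the single morphology rule and the case-marker rules below are assumptions chosen only to regenerate this one sentence, not the system's actual Bangla-UW Dictionary or rule base.

# Illustrative end-to-end sketch of the four generation phases above.
# The miniature lexicon and rules are assumptions for this one example.

# 1. Lexeme selection: UW -> (Bangla root word, dictionary attributes).
lexicon = {
    "play":     ("খেল",   {"V"}),
    "boy":      ("ছেলে",  {"N", "3rd"}),
    "football": ("ফুটবল", {"N"}),
    "field":    ("মাঠ",   {"N"}),
}

# Node-List produced by the UNL parser: node -> (UW, UNL attributes).
nodes = {
    "Node1": ("play",     {"@entry", "@present"}),
    "Node2": ("boy",      {"@pl"}),
    "Node3": ("football", set()),
    "Node4": ("field",    {"@def"}),
}

# 2. Morphology: toy attribute-label-resolution rule for the present-tense verb.
def morphology(root, unl_attrs, dict_attrs):
    if "V" in dict_attrs and "@present" in unl_attrs:
        return root + "ে"            # খেল -> খেলে
    return root

# 3. Function word / case marker insertion (plural রা, locative ে).
def insert_function_word(word, unl_attrs, dict_attrs):
    if "N" in dict_attrs and "@pl" in unl_attrs:
        return word + "রা"           # ছেলে -> ছেলেরা
    if "N" in dict_attrs and "@def" in unl_attrs:
        return word + "ে"            # মাঠ -> মাঠে
    return word

generated = {}
for name, (uw, unl_attrs) in nodes.items():
    root, dict_attrs = lexicon[uw]
    word = morphology(root, unl_attrs, dict_attrs)
    generated[name] = insert_function_word(word, unl_attrs, dict_attrs)

# 4. Syntactic linearization in the Node2 -> Node4 -> Node3 -> Node1 order.
sentence = " ".join(generated[n] for n in ("Node2", "Node4", "Node3", "Node1")) + "।"
print(sentence)                       # ছেলেরা মাঠে ফুটবল খেলে।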
The descriptions of the different phases of the Bangla DeConverter are given in the following segment.
A. Morphology generation
The System makes use of Generation Rules during this process. These Generation Rules are designed on the basis of Bangla Morphological analysis. Three Categories of Morphology have been identified for the purpose of converting a UNL expression to equivalent Bangla Language Sentences. They are: i) Attribute Label Resolution Morphology, ii) Relation Label Resolution Morphology, and iii) Noun, Adjective, Pronoun, and Verb Morphology. Among these three, the two major Morphologies are discussed in detail as follows: i) Attribute Label Resolution Morphology deals with the generation
of Bangla Words on the basis of the UNL attributes attached to a Node and its Grammatical Attributes retrieved from the Lexicon. The Root Words retrieved from the Bangla-UW Dictionary are modified in this phase, depending on their Gender, Number, Person, Tense, Aspect, Modality, and Vowel-ending information. ii) Relation Label Resolution Morphology manages the Prepositions of English, or rather the Postpositions of Bangla, because Prepositions in English correspond to Postpositions in Bangla; these link Nouns, Pronouns, and Phrases to other parts of the sentence. The insertion of some Function Words in the generated output depends upon the UNL Relation and on Conditions imposed on the Attributes of the Parent and Child Nodes of that Relation. A Rule Base has been prepared for this purpose. For each of the 46 UNL Relations, different Function Words are used depending upon the grammatical details of the Target Language [17]. This Rule Base consists of nine Columns, which store the Attributes whose presence or absence needs to be asserted on the Parent or Child Node for the firing of a rule. If more than one Attribute needs to be asserted on a given Node for the firing of a rule, they are stored in the Rule Base separated by the ‘#’ sign. Here, Attributes represent UNL Attributes (obtained from the given UNL expression) or Lexical Attributes (obtained from the Bangla-UW Dictionary) of a Node. The Rule Base for Function Word insertion is illustrated with the example Rule given below: agt:null:null:null:ে:@present#V:VINT#@progress#খেল:N#3rd:1st#2nd Here ‘agt’ is the UNL relation under consideration, and firing of the given rule will result in the insertion of the Function Word ে following the Child Node in the generated output, because the Function Word appears in the Fifth Column and the Second, Third, and Fourth Columns contain ‘null’ in the Rule. The Sixth Column contains ‘@present#V’, which means that the Rule will be fired if the parent of the ‘agt’ relation contains ‘@present’ as its UNL Attribute in the given UNL expression and has ‘V’ as its Lexical Attribute in the Bangla-UW Dictionary. The Seventh Column contains ‘VINT#@progress#খেল’, which refers to the Attributes whose absence needs to be asserted on the Parent Node for the firing of the Rule; it means that the Parent Node should not contain the ‘VINT’ (Intransitive Verb) or ‘খেল’ (‘play’ Verb) Attributes in the Lexicon, or the ‘@progress’ Attribute in the UNL expression. The Eighth Column contains ‘N#3rd’, which refers to the Attributes whose presence needs to be asserted on the Child Node for the firing of the Rule; i.e., the Child should have the ‘N’ (Noun) and ‘3rd’ (Third Person) Attributes in the Bangla-UW Dictionary. The Ninth Column contains ‘1st#2nd’, which refers to the Attributes whose absence needs to be asserted on the Child Node for the firing of the Rule; it means that the Child Node should not refer to the First Person or the Second Person in the Sentence [18]. Thus, if the relation ‘agt’ has a Parent Node with the ‘@present’ and ‘V’ Attributes, without the ‘VINT’, ‘খেল’ or ‘@progress’ Attribute, and has a Child Node with the ‘N’ and ‘3rd’ Attributes and without a ‘1st’ or ‘2nd’ Attribute, then the Function Word ে will be inserted following the Child Node in the generated output [19]. For example, in the UNL relation ‘agt(play(agt>human, obj>game).@present.@entry, boy(icl>maleperson))’ of a UNL expression, the Parent Node of the ‘agt’ Relation is ‘play(agt>human, obj>game)’, having the ‘V’ and ‘@present’ Attributes and lacking the ‘VINT’ Attribute in the Lexicon. The Child Node of the ‘agt’ relation is ‘boy(icl>male child)’, which has the ‘N’ and ‘3rd’ Attributes and does not have the ‘1st’ and ‘2nd’ Attributes in the Lexicon. As a result, the Rule fires, and the Function Word ে following the Child Node ‘boy(icl>male child)’ will appear in the generated output.
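As an illustration of how such a rule row could be evaluated (a hypothetical sketch, not the authors' rule engine: the treatment of the unused second to fourth columns and all function names are assumptions), the following Python code parses the nine colon-separated columns of the example rule and checks its presence and absence conditions against the attribute sets of a parent and a child node.

# Hypothetical sketch of evaluating one nine-column Function Word rule.
# Column reading follows the description above: 1st relation, 5th function
# word, 6th/7th attributes required present/absent on the parent,
# 8th/9th attributes required present/absent on the child; columns 2-4
# are treated here as unused placeholders.
RULE = "agt:null:null:null:ে:@present#V:VINT#@progress#খেল:N#3rd:1st#2nd"

def attr_set(column):
    return set() if column in ("", "null") else set(column.split("#"))

def parse_rule(row):
    cols = row.split(":")
    return {
        "relation":       cols[0],
        "function_word":  cols[4],
        "parent_present": attr_set(cols[5]),
        "parent_absent":  attr_set(cols[6]),
        "child_present":  attr_set(cols[7]),
        "child_absent":   attr_set(cols[8]),
    }

def fires(rule, relation, parent_attrs, child_attrs):
    # Every presence condition must hold and no absence condition may be violated.
    return (relation == rule["relation"]
            and rule["parent_present"] <= parent_attrs
            and not (rule["parent_absent"] & parent_attrs)
            and rule["child_present"] <= child_attrs
            and not (rule["child_absent"] & child_attrs))

rule = parse_rule(RULE)
# Attributes pooled from the UNL expression and the Bangla-UW Dictionary for
# agt(play(...).@present.@entry, boy(...)) of the example above.
parent_attrs = {"@present", "@entry", "V"}
child_attrs  = {"N", "3rd"}
if fires(rule, "agt", parent_attrs, child_attrs):
    print("Insert function word", rule["function_word"], "after the child node")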
V. EXPERIMENTAL RESULT AND TESTING SYSTEM
The System has been tested on several UNL Expressions. It has been observed that the System successfully deals with the resolution of the UNL Relations and generates the Attributes for those Sentences. The System has been tested with the help of UNL Expressions available from the Russian UNL Language Server: the given English sentences were manually translated at the Russian Language Server into equivalent UNL Expressions, and those UNL Expressions were then fed into the proposed UNL-Bangla DeConverter. A comparative analysis is presented in Table I for 5 (five) Sentences. Accuracy will improve with more tested Sentences and additional Rules. The GUI of the Bangla DeConverter is classified into the following three Windows: (1) Bangla Testing Server, (2) DeConversion, and (3) Intermediate output. Figure 2 shows the Bangla DeConverter Input, and Figure 3 shows the Bangla DeConverter Output generated by the proposed Bangla DeConverter. Figure 2. Bangla DeConverter Input. Figure 3. Bangla DeConverter Output.
Table I. Bangla Sentences generated by the DeConverter with their corresponding input UNL Expressions (columns: Sl.; UNL Expression generated by the Russian UNL Language Server; Relations Resolved; Equivalent English Sentence; Bangla Sentence generated by the DeConverter).
1. {unl} agt(read(icl>see>do,agt>person,obj>information).@entry.@present.@progress,kerim(icl>name>abstract_thing,com>male,nam<person)) {/unl} | agt | Karim is reading. | “করিম পড়িতেছে” (Karim Poritechhe).
2. {unl} agt(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,i(icl>person)) obj(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,rice(icl>grain>thing)) {/unl} | agt, obj | I eat rice. | “আমি ভাত খাই” (Aami vat khai).
3. {unl} agt(write(icl>do,agt>person,obj>concrete_thing,ins>functional_thing).@entry.@past,he(icl>person)) obj(write(icl>do,agt>person,obj>concrete_thing,ins>functional_thing).@entry.@past,note(icl>personal_letter>thing).@indef) ins(write(icl>do,agt>person,obj>concrete_thing,ins>functional_thing).@entry.@past,pen(icl>writing_implement>thing).@indef) {/unl} | agt, ins, obj | He wrote a note with a pen. | “সে কলম দিয়ে একটি নোট লিখেছিল” (Se kolom die ekti note likhechhilo).
4. {unl} obj(fly(icl>move>occur,equ>wing,com>air,plt>thing,plf>thing,obj>concrete_thing,plc>thing,ins>thing).@entry.@present,bird(icl>vertebrate>thing).@def) plf(fly(icl>move>occur,equ>wing,com>air,plt>thing,plf>thing,obj>concrete_thing,plc>thing,ins>thing).@entry.@present,nest(icl>retreat>thing).@def) {/unl} | agt, frm, obj | The bird flies from the nest. | “পাখিটি বাসা থেকে উড়ে যায়” (Pakhiti basha theke ure jae).
5. {unl} aoj(live(icl>be,com>style,aoj>person,man>uw).@entry.@present,we(icl>group).@pl) plc(live(icl>be,com>style,aoj>person,man>uw).@entry.@present,dhak{/unl} | aoj, plc | We live in Dhaka. | “আমরা ঢাকায় থাকি” (Amra dhakae thaki).
VI. CONCLUSION AND FUTURE WORK
In this paper, a Rule-Based Bangla DeConverter has been presented. These Rules can currently convert simple UNL Expressions to Bangla Sentences; the rule set is being extended to handle advanced and complex Sentences. The proposed System has been tested on more than 2000 UNL Expressions and achieved an accuracy as high as 89%, which can be considered outstanding in this Field of Study. Moreover, a Web interface has been designed for online DeConversion of a UNL expression to the corresponding Bangla Sentence. It empowers Bangla
Readers to read sentences in their Local Language, even though those sentences were initially written in a different Language, by converting them through their equivalent UNL expressions presented on the Web. This System will also provide an opportunity for Researchers working on MT to explore and expand the UNL beyond its current limits towards the Interlingua Utopia, where Language will no longer be an obstacle for Mankind. Knowledge should not be contained in a jar, but rather allowed to diffuse in an open atmosphere.
REFERENCES
[1] UNL System. Website: http://www.unl.ru/system.html. Date of last retrieval: March 14, 2015.
[2] Universal Networking Language Portal. Website: http://www.undl.org. Date of last retrieval: March 14, 2015.
[3] H. Uchida, M. Zhu, T. G. Della Senta. Universal Networking Language, 2005/6, UNDL Foundation, International Environment House.
[4] H. Uchida, M. Zhu. The Universal Networking Language (UNL) Specification, Version 3.0, Edition 3, Technical Report, UNU, Tokyo, 2005/6, UNDL Foundation, International Environment House, 2004.
[5] EnConverter Specification, Version 3.3, UNL Center/UNDL Foundation, Tokyo 150-8304, Japan, 2002.
[6] DeConverter Specification, Version 2.7, UNL Center/UNDL Foundation, Tokyo 150-8304, Japan, 2002.
[7] D. M. Shahidullah. Bangla Baykaron, Dhaka: Ahmed Mahmudul Haque of Mowla Brothers Prokashani, 2003.
[8] D. C. Shuniti Kumar. Bhasha-Prakash Bangala Vyakaran, Calcutta: Rupa and Company Prokashoni, July 1999, pp. 170-175.
[9] Humayun Azad. Bakkotottyo, Second edition, Dhaka: Bangla Academy Publishers, 1994.
[10] D. S. Rameswar. Shadharan Vasha Biggan and Bangla Vasha, Pustok Biponi Prokashoni, November 1996, pp. 358-377.
[11] M. N. Y. Ali, J. K. Das, S. M. Abdullah Al Mamun, M. E. H. Choudhury, “Specific Features of a Converter of Web Documents from Bengali to Universal Networking Language”, International Conference on Computer and Communication Engineering 2008 (ICCCE’08), Kuala Lumpur, Malaysia, pp. 726-731.
[12] M. N. Y. Ali, J. K. Das, S. M. Abdullah Al Mamun, A. M. Nurannabi, “Morphological Analysis of Bangla Words for Universal Networking Language”, International Conference on Digital Information Management (ICDIM), 2008, London, England, pp. 532-537.
[13] M. N. Y. Ali, A. M. Nurannabi, G. F. Ahmed, J. K. Das, “Conversion of Bangla Sentence for Universal Networking Language”, International Conference on Computer and Information Technology (ICCIT), Dhaka, 2010, pp. 108-113.
[14] M. Z. H. Sarker, M. N. Y. Ali, J. K. Das, “Dictionary Entries for Bangla Consonant Ended Roots in Universal Networking Language”, International Journal of Computational Linguistics (IJCL), Volume 3, Issue 1, 2012, pp. 79-87.
[15] M. Z. H. Sarker, M. N. Y. Ali, J. K. Das, “Outlining Bangla Word Dictionary for Universal Networking Language”, IOSR Journal of Computer Engineering (IOSRJCE), ISSN: 2278-0661, Sep-Oct. 2012.
[16] M. Z. H. Sarker, M. N. Y. Ali, J. K. Das, “Development of Dictionary Entries for the Bangla Vowel Ended Roots for Universal Networking Language”, International Journal of Computer Applications (IJCA), 52(19), August 2012, Published by Foundation of Computer Science, New York, USA, pp. 38-45.
[17] Aloke Kumar Saha, Muhammad F. Mridha and Jugal Krishna Das, “Analysis of Bangla Root Word for Universal Networking Language (UNL)”, International Journal of Computer Applications (IJCA) (0975-8887), Volume 89, No. 17, March 2014.
[18] Muhammad F. Mridha, Aloke Kumar Saha and Jugal Krishna Das, “New Approach
of Solving Semantic Ambiguity Problem of Bangla Root Words Using Universal Networking Language (UNL)”, 3rd International Conference on Informatics, Electronics & Vision (ICIEV), 23-24 May, 2014.
[19] Muhammad F. Mridha, Aloke Kumar Saha, Mahadi Hasan and Jugal Krishna Das, “Solving Semantic Problem of Phrases in NLP Using Universal Networking Language (UNL)”, International Journal of Advanced Computer Science and Applications (IJACSA), Special Issue on Natural Language Processing (NLP), 2014.
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043c0430043a04410438043c0430043b044c043d043e0020043f043e04340445043e0434044f04490438044500200434043b044f0020044d043a04400430043d043d043e0433043e0020043f0440043e0441043c043e044204400430002c0020043f0435044004350441044b043b043a04380020043f043e0020044d043b0435043a04420440043e043d043d043e04390020043f043e044704420435002004380020044004300437043c043504490435043d0438044f0020043200200418043d044204350440043d043504420435002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SKY <FEFF0054006900650074006f0020006e006100730074006100760065006e0069006100200070006f0075017e0069007400650020006e00610020007600790074007600e100720061006e0069006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b0074006f007200e90020007300610020006e0061006a006c0065007001610069006500200068006f0064006900610020006e00610020007a006f006200720061007a006f00760061006e006900650020006e00610020006f006200720061007a006f0076006b0065002c00200070006f007300690065006c0061006e0069006500200065002d006d00610069006c006f006d002000610020006e006100200049006e007400650072006e00650074002e00200056007900740076006f00720065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f00740076006f00720069016500200076002000700072006f006700720061006d006f006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076016100ed00630068002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b006900200073006f0020006e0061006a007000720069006d00650072006e0065006a016100690020007a00610020007000720069006b0061007a0020006e00610020007a00610073006c006f006e0075002c00200065002d0070006f01610074006f00200069006e00200069006e007400650072006e00650074002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f00740020006c00e400680069006e006e00e40020006e00e40079007400f60073007400e40020006c0075006b0065006d0069007300650065006e002c0020007300e40068006b00f60070006f0073007400690069006e0020006a006100200049006e007400650072006e0065007400690069006e0020007400610072006b006f006900740065007400740075006a0061002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d002000e400720020006c00e4006d0070006c0069006700610020006600f6007200200061007400740020007600690073006100730020007000e500200073006b00e40072006d002c0020006900200065002d0070006f007300740020006f006300680020007000e500200049006e007400650072006e00650074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF0045006b00720061006e002000fc0073007400fc0020006700f6007200fc006e00fc006d00fc002c00200065002d0070006f00730074006100200076006500200069006e007400650072006e006500740020006900e70069006e00200065006e00200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f0062006100740020007600650020004100630072006f006200610074002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /UKR <FEFF04120438043a043e0440043804410442043e043204430439044204350020044604560020043f043004400430043c043504420440043800200434043b044f0020044104420432043e04400435043d043d044f00200434043e043a0443043c0435043d044204560432002000410064006f006200650020005000440046002c0020044f043a0456043d04300439043a04400430044904350020043f045604340445043e0434044f0442044c00200434043b044f0020043f0435044004350433043b044f043404430020043700200435043a04400430043d044300200442043000200406043d044204350440043d043504420443002e00200020042104420432043e04400435043d045600200434043e043a0443043c0435043d0442043800200050004400460020043c043e0436043d04300020043204560434043a0440043804420438002004430020004100630072006f006200610074002004420430002000410064006f00620065002000520065006100640065007200200035002e0030002004300431043e0020043f04560437043d04560448043e04570020043204350440044104560457002e> /ENU (Use these settings to create Adobe PDF documents best suited for on-screen display, e-mail, and the Internet. 
Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.) /Namespace [ (Adobe) (Common) (1.0) /OtherNamespaces [ /AsReaderSpreads false /CropImagesToFrames true /ErrorControl /WarnAndContinue /FlattenerIgnoreSpreadOverrides false /IncludeGuidesGrids false /IncludeNonPrinting false /IncludeSlug false /Namespace [ (Adobe) (InDesign) (4.0) /OmitPlacedBitmaps false /OmitPlacedEPS false /OmitPlacedPDF false /SimulateOverprint /Legacy /AddBleedMarks false /AddColorBars false /AddCropMarks false /AddPageInfo false /AddRegMarks false /ConvertColors /ConvertToRGB /DestinationProfileName (sRGB IEC61966-2.1) /DestinationProfileSelector /UseName /Downsample16BitImages true /FlattenerPreset << /PresetSelector /MediumResolution /FormElements false /GenerateStructure false /IncludeBookmarks false /IncludeHyperlinks false /IncludeInteractive false /IncludeLayers false /IncludeProfiles true /MultimediaHandling /UseObjectSettings /Namespace [ (Adobe) (CreativeSuite) (2.0) /PDFXOutputIntentProfileSelector /NA /PreserveEditing false /UntaggedCMYKHandling /UseDocumentProfile /UntaggedRGBHandling /UseDocumentProfile /UseDocumentBleed false>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
<s>Bangla Natural Language Interface: A Framework for Building a Natural Language Interface for Bangla. Yeasin Ar Rahman, Mahtabul Alam Sohan, Khalid Ibn Zinnah, Mohammed Moshiul Hoque, Chittagong University of Engineering & Technology, Chittagong-4349, Bangladesh. nibir201188@gmail.com, mahtabul1993@gmail.com, khalidex@yahoo.com, moshiul_240@cuet.ac.bd

Abstract— Mobile computing devices connect people to the Internet, the largest source of information in the world. To make proper use of this knowledge on such devices, most people need a Natural Language Interface; Siri, Google Now, and Cortana are examples of such interfaces. Because Bangla is a low-resource language, building such an interface is difficult and time-consuming. However, given the growing number of smartphone and smart-device users in Bangla-speaking regions, application developers need such interfaces to deliver web services effectively. This paper addresses this issue and gives an empirical framework for building a feasible Natural Language Interface in Bangla and similar low-resource languages.

Keywords— Bangla Natural Language Interface; Human Computer Interaction; Bangla Speech to Text; Bangla Language Processing; Artificial Intelligence.

I. INTRODUCTION

Language is used for communication, and it takes several forms: verbal, written, and visual (sign language, body movement, nods, gestures, etc.). Each form is distinctive enough that it is difficult for computers to understand its meaning. With the arrival of smart mobile devices such as smartphones, wearables (smart watches, smart bands, etc.), and the Internet of Things, traditional user interfaces such as the GUI (graphical user interface) and CLI (command line interface) are no longer an efficient use of human time [1]. In many cases these devices lack traditional input-output hardware such as a keyboard, mouse, or display. To give a pleasant user experience, a Natural Language Interface (NLI), or more generally a Voice User Interface (VUI), is a practical and often necessary approach on these devices. An NLI generally includes an artificial intelligence unit so that it can distinguish between different user commands and then confidently perform the task the user intended. The process relies on Automated Speech Recognition and Natural Language Understanding. This kind of artificial intelligence program or service is generally called an Intelligent Personal Assistant (IPA), which can perform tasks or services for a user [2]. In an NLI, linguistic events such as verbs, phrases, and clauses act as user interface (UI) controls for searching, creating, selecting, and modifying data in software applications. In interface design, NLIs are sought after for their speed and ease of use, but most struggle to understand the wide variety of ambiguous input [5]. Processing a language consists of several important steps, sequentially: morphological, syntactic, and semantic analysis. For a Natural Language Interface, part-of-speech tagging, named entity recognition, and intent analysis are the key fields for achieving the desired result. Using machine learning methods, state-of-the-art systems have achieved striking results [3]. However, one of the key reasons for this success is the availability of large amounts of annotated data. For</s>
<s>most of the world's languages, such data is not available. Over 250 million people use Bangla as their medium of communication, and there are currently about 60 million internet users in Bangladesh [23], most of whom access the internet through mobile devices. At present, web and mobile application developers do not have access to Bangla natural language processing technologies such as automatic speech recognition, text to speech, natural language search, or optical character recognition. Since most of the general population in Bangla-speaking regions is not proficient in English, it is not possible to provide satisfactory services through mobile devices without these natural language components, nor can the applications and services currently available in the region be fine-tuned to regional preferences without them. Developing a Natural Language Interface is important for Bangla because it makes information search easier, enables home and industrial automation, makes communication with robots possible, improves navigation, and allows people with disabilities to communicate easily with people and devices. Most importantly, it removes the language barrier to the internet and thus allows the population in villages to take part in the digital world; through a robust NLI it is possible to deliver digital services effectively to both rural and urban populations. Very little data is available for Bangla language research, which makes it difficult to develop a Natural Language Interface. In this work we address this issue and provide a standardized guideline and an empirical framework for building a Bangla natural language interface. Although annotated data is scarce, we believe that if the Bangla data available on the internet is used effectively, it is possible to build a feasible system that provides natural language services to application developers, who can in turn provide intelligent services to the Bangla-speaking population.

II. BACKGROUND

There has been significant research in artificial intelligence and its subfields (e.g., human-computer interaction) toward systems that can interact with humans at the same level of understanding as another human. Research suggests, however, that it is quite difficult to formulate a methodology for simulating human behavior in machines, because our knowledge of human behavior is still at a primitive level. Practical systems are therefore designed to perform specific tasks in specific fields. The Open Agent Architecture (OAA) [21] is one of the most prominent frameworks for building multimodal interfaces. It delegates most of the work to individual services, which allows a clean, domain-independent design, but it requires that such services already exist; in most low-resource languages they do not and have to be integrated into the system. RADAR [22] is another notable system, a task-specific personal-assistant framework that allows a calendar to be maintained intuitively. However, it</s>
<s>paved the way for modern personal-assistant architectures. In the domain of intelligent personal assistants, the most influential project in terms of technology was the CALO project [11], funded by DARPA (Defense Advanced Research Projects Agency). One of its products was PAL (Personal Assistant that Learns), which provided a general guideline for building a useful NLI system around key aspects such as learning, data management, data acquisition, controllers, and the user interface. Today's most prominent intelligent personal assistants, for example Siri (Apple), Cortana (Microsoft), Watson (IBM), M (Facebook), S Voice (Samsung), and Google Now (Google), all use similar concepts in their NLI platforms, but they were built primarily for English-speaking users. To our knowledge there has not been any significant work on building an NLI in Bangla, although most of the underlying science for constructing such an interface has been explored. Significant works on Bengali speech recognition and on knowledge representation and reasoning are summarized here.

In almost all cases an NLI relies on a speech recognition engine. The first large-vocabulary continuous speech recognition system was Sphinx-II [4], developed at Carnegie Mellon University. Most modern general-purpose speech recognition systems use Hidden Markov Models (HMMs); HMMs are popular because they can be trained automatically and are simple and computationally feasible to use. The state of the art in speech recognition, however, is the deep neural network (DNN). Using very large amounts of data, researchers at companies such as Microsoft, Google, IBM, Baidu, Apple, and Nuance have reached near-perfect accuracy [8]. DNN architectures generate compositional models in which extra layers compose features from lower layers, giving a huge learning capacity and thus the potential to model complex patterns in speech data [9]. For Bangla there is no general-purpose speech recognition service, and no online or offline engine to convert Bangla speech to text. There have been several works on Bengali speech-to-text systems, but almost all of them emphasized developing new algorithms rather than implementing an application or service. Hasnat et al. [12] built a customized Hidden Markov Model (HMM) based scheme for pattern classification and integrated a stochastic model within it for Bengali speech-to-text, using the HTK framework for their test system. Firoze et al. [13] developed a fuzzy-logic-based speech recognition system and proposed that fuzzy logic should be the basis for all linguistic-ambiguity-related problems in Bengali; they showed empirically that fuzzy logic improves the response for more ambiguous linguistic entities in Bengali speech, and their study was the first attempt to use cepstral analysis with an artificial neural network (ANN) to recognize Bengali speech. The only large-vocabulary Bengali continuous speech recognition system we know of is Shruti-II, developed at IIT Kharagpur for the visually impaired community [14]. A speech-to-text (STT) engine produces textual output in formats specified by the developers, which can then be processed further by other application components. The Natural Language Understanding part of an</s>
<s>NLI makes use of named entity recognition (NER), parts-of-speech (POS) tagging, the Bangla WordNet, and sentence parsing. For parts-of-speech tagging, a Maximum Entropy (ME) approach was proposed by Ekbal et al. [15]. In 2007, Hasan et al. performed a comparative study of HMM, unigram, and Brill's methods and showed that Brill's method gives better performance [16]. Named entity recognition is very difficult for low-resource languages like Bangla; Ekbal and Bandyopadhyay proposed a support-vector-machine method for Bangla NER in 2008 [17], and Cucerzan and Yarowsky described a language-independent approach to named entity recognition [18]. Dependency parsing is also quite useful for an NLI: Das and Garain evaluated two different Bengali dependency parsers [19], and De et al. used a demand-satisfaction approach for dependency parsing in Bangla [20]. Most previous work in Bangla shares one key limitation: almost all of it was tested on datasets too small to be suitable for an NLI.

III. METHODS

An NLI is a complex system, so its internal organization is divided into several modules. The proposed system has the following modules: Automatic Speech Recognition Unit (ASRU), Intelligent Response Unit (IRU), Knowledge Base, Question-Answering and Search Engine, Control Systems API, and a text-to-speech based response system. The system architecture is shown in Fig. 1. Each module is important for the system to work properly, but the internal workings of each module are separate from the overall architecture, so any module can be changed and improved without interfering with the other components. The approaches used by the different modules are given in the following sections.

A. Automatic Speech Recognition Unit
The primary user interface of our system is voice based, so there is a great emphasis on voice input. The unit works as follows: the user gives a command or query to the device (computer or mobile), and the system decodes the voice data using the CMU Sphinx toolkit. CMU Sphinx [10] is an HMM-based large-vocabulary continuous speech recognition system that supports any language if an acoustic model and a language model are provided. It is used here because the required acoustic and language models are easier to train for this toolkit, and helper applications needed to train the models are readily available. After decoding the speech signal, CMU Sphinx produces a text output that is sent to the IRU, which then uses various natural language processing techniques to evaluate the sentence and correctly identify the user's intent. (Fig. 1: System Architecture)

B. Intelligent Response Unit
The Intelligent Response Unit (IRU) is the most important component of the proposed system. Its responsibility is to transform human language into actionable data. It uses several natural language processes, each implemented as a separate component using the Apache OpenNLP [24] framework. The architecture of the IRU is shown in Fig. 2. (Fig. 2: Intelligent Response Unit Architecture)</s>
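To make the modular decomposition above concrete, the following is a minimal Python sketch of the pipeline with toy, stubbed-out stages; all function names, the pretend transcript, and the heuristics are illustrative placeholders rather than the authors' implementation (the real ASRU would call CMU Sphinx, and the real taggers would be OpenNLP MaxEnt models).

```python
# A minimal sketch of the module boundaries described above; names and the
# toy heuristics are illustrative placeholders, not the authors' code.

def asr_decode(audio_path: str) -> str:
    """Stands in for the ASRU (CMU Sphinx in the paper): speech in, text out."""
    return "what is the temperature today"   # pretend transcript

def tokenize(text: str) -> list[str]:
    """Lexical analysis: split the utterance into word tokens."""
    return text.split()

def tag_pos(tokens: list[str]) -> list[tuple[str, str]]:
    """Placeholder POS tagger; the paper trains a MaxEnt model with OpenNLP."""
    return [(t, "NOUN" if t == "temperature" else "X") for t in tokens]

def find_entities(tagged: list[tuple[str, str]]) -> dict[str, str]:
    """Placeholder named-entity recognizer."""
    return {"topic": "temperature"} if any(t == "temperature" for t, _ in tagged) else {}

def analyse_intent(tokens: list[str], entities: dict[str, str]) -> str:
    """Keyword-based intent analysis, expanded in a later sketch."""
    return "weather" if "temperature" in tokens else "unknown"

def respond(audio_path: str) -> dict:
    """IRU pipeline: each stage feeds the next, mirroring Fig. 2, so any stage
    can be swapped out without touching the others."""
    text = asr_decode(audio_path)
    tokens = tokenize(text)
    tagged = tag_pos(tokens)
    entities = find_entities(tagged)
    intent = analyse_intent(tokens, entities)
    return {"intent": intent, "entities": entities, "type": "query"}

if __name__ == "__main__":
    print(respond("query.wav"))
```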
<s>A brief explanation of how each component works is given below.

1) Lexical Analysis and Stemming: Takes the textual input from the previous unit and converts it into tokens using a lexical analyzer, which stems individual words to their root forms and splits the sentence into word tokens. Each token is then passed to the next stage. For example:
Original form: "বিরানী কোথায় সবচেয়ে ভাল?"
Tokenized form: "বিরানী", "কোথায়", "সবচেয়ে", "ভাল", "?"

2) Parts of Speech Tagging: The parts-of-speech (POS) tagger assigns a POS tag to each token. This is a very important step because all subsequent steps depend greatly on correct tagging. It uses a statistical MaxEnt learning model trained with OpenNLP. A simple form of the MaxEnt model is given in (1),

p(t \mid c) = \frac{\exp\big(\sum_i \lambda_i f_i(t, c)\big)}{\sum_{t'} \exp\big(\sum_i \lambda_i f_i(t', c)\big)}   (1)

where t is a candidate tag, c is its context, the f_i are feature functions, and the λ_i are their learned weights.

3) Named Entity Recognition: The named entity recognizer marks all named entities in the text; these are the objects the system will run a command on or search an answer for. It also uses a MaxEnt model. For example:
Tokenized form: "বিরানী", "কোথায়", "সবচেয়ে", "ভাল", "?"
Named entities: <Entity Food>"বিরানী"</Entity Food>, "কোথায়", "সবচেয়ে", "ভাল", "?"

4) Sentence Dependency Parsing: The dependency parser creates a tree structure over the text that defines the relationships between the different parts of the sentence. It is used for generating the command or query output and requires a treebank. The dependency-parsed tree for the sentence "কাজী নজরুল ইসলাম কোথায় জন্মগ্রহণ করেছেন?" is given in Fig. 3; the figure is output from the annotation tool WebAnno [26]. (Fig. 3: Example of dependency parsing)

5) Keyword Based Intent Analysis: Intent analysis is keyword based. Each keyword can be rephrased by other words, provided the word is very close to it in the Bangla WordNet. A sample of the process is shown in Table I.

TABLE I. INTENT ANALYSIS
Intent | Keyword | Alternate keywords
Weather | আবহাওয়া | পরিবেশ, অবস্থা
Control | জ্বালাও | অন করো

An intent depends on the application services supported by the application developer, and intents can have multiple keywords for different processes. If two intent keywords collide, the named entities are used to resolve the collision.

6) Output Generation: After the intent has been recognized, the system creates an output record based on the parsed text and the recognized entities. A sample is given in Table II.

TABLE II. OUTPUT GENERATION
Command utterance "এসি টেম্পারেচার ২৫ এ দাও" → {intent: AC control, command: Set Temp, value: 25 degree, place: this.room, type: command}
Query utterance "আজকে রাতের পরিবেশ কেমন হবে?" → {intent: weather, day: this.today.date, place: this.location, time: "রাত", type: query}</s>
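The keyword-driven intent analysis and the structured output of Table II might be illustrated with a small sketch like the following; the intent table contents, the entity slots, and the helper names are hypothetical and only follow the collision rule described above, not the authors' implementation.

```python
# Sketch of keyword-based intent analysis with entity-based collision
# resolution; the table contents and helper names are illustrative only.

INTENT_KEYWORDS = {
    # intent  -> primary keyword plus near-synonyms (cf. Table I)
    "weather": {"weather", "climate", "temperature"},
    "control": {"set", "turn_on", "switch_on"},
}

# Entity slots that disambiguate when two intents match the same utterance.
INTENT_ENTITIES = {
    "weather": {"place", "day"},
    "control": {"device", "value"},
}

def detect_intent(tokens: set[str], entities: dict[str, str]) -> str:
    """Return the intent whose keywords appear in the utterance; on a
    collision, prefer the intent whose expected entity slots were filled."""
    matches = [i for i, kws in INTENT_KEYWORDS.items() if tokens & kws]
    if len(matches) <= 1:
        return matches[0] if matches else "open_query"
    return max(matches, key=lambda i: len(INTENT_ENTITIES[i] & entities.keys()))

def build_output(intent: str, entities: dict[str, str], kind: str) -> dict:
    """Assemble the actionable record handed to the control APIs or the QA
    engine (shape modelled loosely on Table II)."""
    record = {"intent": intent, "type": kind}
    record.update(entities)
    return record

if __name__ == "__main__":
    toks = {"set", "temperature"}                 # "set the AC to 25"
    ents = {"device": "AC", "value": "25 degree"}
    intent = detect_intent(toks, ents)            # collision resolved to "control"
    print(build_output(intent, ents, "command"))
```

In this toy run both the weather and control keyword sets match, and the collision is resolved in favour of control because the recognized entities fill its device and value slots, mirroring the rule stated above.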
<s>C. Knowledge Base
The knowledge base is the physical database where all the information resides. The system uses an Apache Solr [25] based knowledge database, an open-source full-text search engine. The data sources are: 1) Wikipedia, 2) Banglapedia, and 3) newspaper, blog, and Bangla website crawl data.

D. Question Answering and Search
For factoid question answering (QA) the system uses the general methodologies of question answering systems. It ranks the candidate documents with a TF/IDF [7] based algorithm; the weight of term i in document j is computed from the document-term matrix using (2), and a cut-off of the top 20 results keeps the calculation time to a minimum.

w_{i,j} = tf_{i,j} \times idf_i   (2)

After finding the candidate passages, the QA system performs the following tasks:
1) Question Classification: The question is analyzed using the Webclopedia QA typology [6]. Factoid questions follow patterns, so a question classifier can predict the type of the expected answer.
2) Answer Type Pattern Extraction: After classifying the question, the system consolidates candidate sentences and ranks them using the following independent features [7]: a) answer type match, b) pattern match, c) number of matched question keywords, d) keyword distance, e) novelty factor, f) apposition features, g) punctuation location, and h) sequences of question terms. If the confidence score for an answer is above 70%, that answer is shown; otherwise the top five answers and their associated document links are displayed. Open-domain questions are delegated to the search engine, the Apache Solr search system mentioned above.</s>
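The following is a minimal sketch of the document ranking and answer-selection rule just described, assuming a toy in-memory corpus; the document contents, scores, and function names are illustrative, and the real system would query Apache Solr rather than score documents in memory.

```python
import math
from collections import Counter

# Toy sketch of TF/IDF passage ranking with a top-k cut-off and the 70%
# confidence rule described above; corpus and names are illustrative.

DOCS = {
    "doc1": "kazi nazrul islam was born in churulia in the burdwan district",
    "doc2": "the weather in dhaka is hot and humid in summer",
    "doc3": "nazrul is the national poet of bangladesh",
}

def tf_idf_rank(query: str, docs: dict[str, str], top_k: int = 20) -> list[tuple[str, float]]:
    """Score documents with w_ij = tf_ij * idf_i summed over query terms,
    then keep only the top_k candidates (the paper uses a cut-off of 20)."""
    n = len(docs)
    tokenized = {d: Counter(text.split()) for d, text in docs.items()}
    df = Counter()                      # document frequency per term
    for counts in tokenized.values():
        df.update(set(counts))
    scores = {}
    for d, counts in tokenized.items():
        score = 0.0
        for term in query.split():
            if term in counts:
                score += counts[term] * math.log(n / df[term])
        scores[d] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

def answer(query: str, confidence: float) -> str:
    """Mimic the answer-selection rule: one answer above 70% confidence,
    otherwise fall back to the top candidate documents."""
    ranked = tf_idf_rank(query, DOCS)
    if confidence > 0.70:
        return f"best answer taken from {ranked[0][0]}"
    return "top candidates: " + ", ".join(d for d, _ in ranked[:5])

if __name__ == "__main__":
    print(tf_idf_rank("where was nazrul born", DOCS))
    print(answer("where was nazrul born", confidence=0.55))
```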
<s>E. Control Systems APIs
The control system APIs are sets of methods exposed to control a device or application programmatically. Each system has a different set of control APIs, and any API can be logically combined with the proposed framework. An API takes the command output from the IRU and performs the task specified in that command. Application developers have to use the intents defined in the previous steps when designing their APIs.

F. Text to Speech Confirmation
After all steps are completed, the user must be told that the task has been carried out, which can be done by a text-to-speech (TTS) engine. When the result is the output of a query, it can be shown directly on the display; if no display device is present, it can be spoken by the TTS engine.

IV. EXPERIMENTS

This section describes the data collection process, the data collection environment, the tools used, and the primary results.

A. Data Collection
For robust speech-to-text conversion we tried to identify the possible utterances a user might say during an operation. As there had been no previous attempt to build an NLI for Bangla, we constructed a Bangla command script by adapting similar utterances used in other languages. The script has 250 utterances, and the domains it covers are given in Table III. We also found that for most command-and-control utterances the vocabulary size is limited; however, precision matters more for these commands because they are in many cases similar and one command can easily be confused with another. The solution to this problem is to collect a large amount of data to capture the voice properties of the users accurately. The voices were taken from users in the age range of 20-30 years, mainly from people who volunteered for the project, with a rough split of 75% male and 25% female. The users are mostly students, with some service holders and businessmen, and their education level varies from higher secondary to university. The voice was recorded in the standard dialect of Bangla, and we tried to minimize regional bias by taking voices from people of different districts.

TABLE III. COMMAND AND SERVICE SCRIPT
Domain Name | Example Utterance
Weather | আজকের তাপমাত্রা কত?
Food and Restaurant | বিরানী কোথায় সবচেয়ে ভাল?
General Directions for Items (Stationery/Grocery) | আশেপাশে বইখাতা কোথায় পাওয়া যাবে?
Directions for Landmarks | বনানীতে হাসপাতাল কোথায়?
Travel Related | কাছাকাছি দেখার মত কি আছে?
Price Related (General/Stock Market) | গতকাল [মুরগীর] দাম কত ছিল?
General Query | বিরানির রেসিপি বের কর।
General Knowledge Question | কাজী নজরুল ইসলাম কোথায় জন্মগ্রহণ করেছেন?
Note and Alarm | সকাল ৭ টায় এলার্ম সেট কর।
Communication | বাসায় ফোন দাও
Conversion and Calculation | ৪৫ ডলার সমান কত টাকা?
Sports Query | বাংলাদেশের স্কোর কত?
Transportation Timetable | আজকে পারাবত ল কয়টায় ছাড়বে?
Device Control | এসি টেম্পারেচার ২৫ এ দাও
Maps and Fare Calculation | ধানমন্ডি ২৭ থেকে ধানমন্ডি ১০ এর রিক্সা ভাড়া কত?
Numbers | এক হাজার দুইশ পাঁচ
Program Control | আমার কল রেকর্ড দেখাও

On the other hand, in the case of queries it is not practically possible to guess what a user might say. To address this issue we created another script made up of various texts collected from newspapers, television, novels, and other domains containing common words in everyday use. A partial list of the data sources in our query script is given in Table IV. The voice data has been collected under the following conditions: 1) Microphone: a Blue Yeti Professional microphone is used to collect the data; 2) Software: the Audacity software is used for recording; 3) Environment: the data is collected in a natural environment so that natural background noise is taken into consideration.

TABLE IV. DOMAINS FOR GENERAL QUERY SCRIPT
Source Name | Percentage of total utterances (rounded)
Wikipedia | 20%
Newspaper Editorials (Prothom-Alo) | 15%
Blogs (Somewherein Blog, Bdnews24 Blog) | 20%
Websites (techtunes, techtweets) | 10%
News (Prothom-Alo, Bdnews24) | 10%
Novels (Dorojar Opashe, Parapar, Meku Kahini) | 10%
History (Ekattorer Dinguli) |
Famous Personalities (Humayun Ahmed) |
Famous Places (Cox's Bazar, Jaflong) |

For the knowledge database, all articles from the Bangla Wikipedia have been collected and hand cleaned; the result contains 240K lines, which to our knowledge is one of the largest Bangla text corpora. Data from popular newspapers, blogs, and websites is still being collected. Because most open-source NLP software does not directly support Bangla in Unicode form, we needed</s>
<s>a transliteration tool to port Bangla support easily into these tools. Open-source transliteration tools for Bangla are not suitable for large amounts of text because they are very slow, so we developed a Fork-Join-framework-based parallel transliteration application that sped up the process several times.

B. Results
System development is at a preliminary stage and the data collection phase is still ongoing. Several hours of voice data have been collected and are being tested to see whether the quality matches expectations. The results for the first 2 hours of the command and service script are given here, tested with two different models, a semi-continuous model and a continuous model, using two parameters. The word error rate (WER) is the number of wrongly recognized words per 100 recognized words and measures the accuracy of the system (a small worked sketch of this metric is given after the reference list), while the response time (RT) measures how long the system takes to recognize a single sentence. We found that semi-continuous models detect utterances faster while continuous models are more accurate. Accuracy and rounded response time are given in Table V; once data collection is complete, these figures may change.

TABLE V. WORD ERROR RATE AND RESPONSE TIME
Model | WER | RT
Continuous (800 words) | 6.778 | 0.8 s
Semi-Continuous (800 words) | 8.598 | 0.5 s

V. CONCLUSION

The primary contribution of this work is the formulation of a feasible framework for low-resource languages like Bangla. Where previous research exists, we propose using it to implement a particular feature of the system, and where no such work is available, we suggest adapting existing systems to Bangla. The secondary contribution is the construction and design of large datasets for different natural language processes: our planned Bangla speech corpus is, as far as we know, the largest speech corpus yet designed for Bangla, and the language model is also large enough to use in any large-scale production environment. In continuation of this work, the experimental results and accuracy data will be published, along with a performance analysis and the hardware requirements for specific tasks. Further research is needed on a machine-learning-based question answering engine, and deep learning can be applied to the speech recognition engine.

REFERENCES
[1] P. Maes, "Agents that reduce work and information overload," Communications of the ACM, vol. 37, no. 7, pp. 30–40, Jul. 1994.
[2] N. R. Jennings and M. Wooldridge, "Applications of Intelligent Agents," Agent Technology, pp. 3–28, 1998.
[3] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, pp. 25–28, 2009.
[4] X. Huang, F. Alleva, H.-W. Hon, M.-Y. Hwang, K.-F. Lee, and R. Rosenfeld, "The SPHINX-II speech recognition system: an overview," Computer Speech & Language, vol. 7, no. 2, pp. 137–148, 1993.
[5] P. R. Cohen, "The role of natural language in a multimodal interface," Proceedings of the 5th annual ACM symposium</s>
<s>on User interface software and technology - UIST '92, 1992.
[6] D. Ravichandran and E. Hovy, "Learning surface text patterns for a question answering system," in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, pp. 41–47, 2002.
[7] D. Jurafsky and J. H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 2nd ed., Pearson Prentice Hall, Ch. 23, pp. 5–23, 2008.
[8] G. Hinton et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, Nov. 2012.
[9] L. Deng and D. Yu, Deep Learning: Methods and Applications, Grand Rapids, MI, United States: now publishers, pp. 198–213, 2014.
[10] P. Lamere et al., "Design of the CMU Sphinx-4 decoder," in INTERSPEECH, 2003.
[11] P. M. Berry, K. Myers, T. E. Uribe, and N. Y. Smith, "Constraint solving experience with the CALO project," in Changes '05 International Workshop on Constraint Solving under Change and Uncertainty, 2005.
[12] M. A. Hasnat, J. Mowla, and M. Khan, "Isolated and continuous Bangla speech recognition: implementation, performance and application perspective," Center for Research on Bangla Language Processing (CRBLP), 2007.
[13] A. Firoze, M. S. Arifin, and R. M. Rahman, "Bangla user adaptive word speech recognition," International Journal of Fuzzy System Applications, vol. 3, no. 3, pp. 1–36, 2013.
[14] S. Mandal, B. Das, and P. Mitra, "Shruti-II: A vernacular speech recognition system in Bengali and an application for visually impaired community," Students' Technology Symposium, IEEE, 2010.
[15] A. Ekbal, R. Haque, and S. Bandyopadhyay, "Maximum entropy based Bengali part of speech tagging," in A. Gelbukh (Ed.), Advances in Natural Language Processing and Applications, Research in Computing Science (RCS) Journal, vol. 33, pp. 67–78, 2008.
[16] F. M. Hasan, N. UzZaman, and M. Khan, "Comparison of different POS tagging techniques (N-gram, HMM and Brill's tagger) for Bangla," Advances and Innovations in Systems, Computing Sciences and Software Engineering, Springer Netherlands, pp. 121–126, 2007.
[17] A. Ekbal and S. Bandyopadhyay, "Bengali named entity recognition using support vector machine," IJCNLP, 2008.
[18] S. Cucerzan and D. Yarowsky, "Language independent named entity recognition combining morphological and contextual evidence," in Proceedings of the 1999 Joint SIGDAT Conference on EMNLP and VLC, 1999.
[19] A. Das and A. S. U. Garain, "Evaluation of two Bengali dependency parsers," in 24th International Conference on Computational Linguistics, 2012.
[20] S. De, A. Dhar, and U. Garain, "Structure simplification and demand satisfaction approach to dependency parsing for Bangla," in Proc. of 6th Int. Conf. on Natural Language Processing (ICON) Tool Contest: Indian Language Dependency Parsing, 2009.
[21] A. Cheyer and D. Martin, "The open agent architecture," Autonomous Agents and Multi-Agent Systems, vol. 4, no. 1, pp. 143–148, 2001.
[22] P. J. Modi, M. Veloso, S. F. Smith, and J. Oh, "CMRadar: A personal assistant agent for calendar management," in Agent-Oriented Information Systems II, pp. 169–181, Springer Berlin Heidelberg, 2005.
[23] B. T. R. Commission, "Internet subscribers in Bangladesh</s>
<s>July, 2016," 2016. [Online]. Available: http://www.btrc.gov.bd/content/internet-subscribers-bangladesh-july-2016. Accessed: Sep. 1, 2016. [24] T. A. S. Foundation, "Apache OpenNLP," 2010. [Online]. Available: https://opennlp.apache.org/. Accessed: Sep. 6, 2016. [25] T. A. S. Foundation, "Apache Solr," 2016. [Online]. Available: http://lucene.apache.org/solr/. Accessed: Sep. 6, 2016. [26] R. E. d. Castilho, C. Biemann, I. Gurevych and S.M. Yimam, “WebAnno: a flexible, web-based annotation tool for CLARIN”. Proceedings of the CLARIN Annual Conference (CAC), 2014. 940 /ASCII85EncodePages false /AllowTransparency false /AutoPositionEPSFiles false /AutoRotatePages /None /Binding /Left /CalGrayProfile (Gray Gamma 2.2) /CalRGBProfile (sRGB IEC61966-2.1) /CalCMYKProfile (U.S. Web Coated \050SWOP\051 v2) /sRGBProfile (sRGB IEC61966-2.1) /CannotEmbedFontPolicy /Warning /CompatibilityLevel 1.4 /CompressObjects /Off /CompressPages true /ConvertImagesToIndexed true /PassThroughJPEGImages true /CreateJobTicket false /DefaultRenderingIntent /Default /DetectBlends true /DetectCurves 0.0000 /ColorConversionStrategy /LeaveColorUnchanged /DoThumbnails false /EmbedAllFonts true /EmbedOpenType false /ParseICCProfilesInComments true /EmbedJobOptions true /DSCReportingLevel 0 /EmitDSCWarnings false /EndPage -1 /ImageMemory 1048576 /LockDistillerParams true /MaxSubsetPct 100 /Optimize false /OPM 0 /ParseDSCComments false /ParseDSCCommentsForDocInfo false /PreserveCopyPage true /PreserveDICMYKValues true /PreserveEPSInfo false /PreserveFlatness true /PreserveHalftoneInfo true /PreserveOPIComments false /PreserveOverprintSettings true /StartPage 1 /SubsetFonts false /TransferFunctionInfo /Remove /UCRandBGInfo /Preserve /UsePrologue false /ColorSettingsFile () /AlwaysEmbed [ true /Arial-Black /Arial-BoldItalicMT /Arial-BoldMT /Arial-ItalicMT /ArialMT /ArialNarrow /ArialNarrow-Bold /ArialNarrow-BoldItalic /ArialNarrow-Italic /ArialUnicodeMS /BookAntiqua /BookAntiqua-Bold /BookAntiqua-BoldItalic /BookAntiqua-Italic /BookmanOldStyle /BookmanOldStyle-Bold /BookmanOldStyle-BoldItalic /BookmanOldStyle-Italic /BookshelfSymbolSeven /Century /CenturyGothic /CenturyGothic-Bold /CenturyGothic-BoldItalic /CenturyGothic-Italic /CenturySchoolbook /CenturySchoolbook-Bold /CenturySchoolbook-BoldItalic /CenturySchoolbook-Italic /ComicSansMS /ComicSansMS-Bold /CourierNewPS-BoldItalicMT /CourierNewPS-BoldMT /CourierNewPS-ItalicMT /CourierNewPSMT /EstrangeloEdessa /FranklinGothic-Medium /FranklinGothic-MediumItalic /Garamond /Garamond-Bold /Garamond-Italic /Gautami /Georgia /Georgia-Bold /Georgia-BoldItalic /Georgia-Italic /Haettenschweiler /Impact /Kartika /Latha /LetterGothicMT /LetterGothicMT-Bold /LetterGothicMT-BoldOblique /LetterGothicMT-Oblique /LucidaConsole /LucidaSans /LucidaSans-Demi /LucidaSans-DemiItalic /LucidaSans-Italic /LucidaSansUnicode /Mangal-Regular /MicrosoftSansSerif /MonotypeCorsiva /MSReferenceSansSerif /MSReferenceSpecialty /MVBoli /PalatinoLinotype-Bold /PalatinoLinotype-BoldItalic /PalatinoLinotype-Italic /PalatinoLinotype-Roman /Raavi /Shruti /Sylfaen /SymbolMT /Tahoma /Tahoma-Bold /TimesNewRomanMT-ExtraBold /TimesNewRomanPS-BoldItalicMT /TimesNewRomanPS-BoldMT /TimesNewRomanPS-ItalicMT /TimesNewRomanPSMT /Trebuchet-BoldItalic /TrebuchetMS /TrebuchetMS-Bold /TrebuchetMS-Italic /Tunga-Regular /Verdana /Verdana-Bold /Verdana-BoldItalic /Verdana-Italic /Vrinda /Webdings /Wingdings2 /Wingdings3 /Wingdings-Regular /ZWAdobeF /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true 
</s>
<s>ARPN Journal of Engineering and Applied Sciences, Vol. 10, No. 15, August 2015, ISSN 1819-6608. ©2006-2015 Asian Research Publishing Network (ARPN). www.arpnjournals.com

DESIGN AND IMPLEMENTATION OF AN EFFICIENT ENCONVERTER FOR BANGLA LANGUAGE

M. F. Mridha1, Aloke Kumar Saha1, Md. Akhtaruzzaman Adnan1, Molla Rashied Hussein1 and Jugal Krishna Das2
1Department of Computer Science and Engineering, University of Asia Pacific, Dhaka, Bangladesh
2Department of Computer Science and Engineering, Jahangirnagar University, Savar, Dhaka, Bangladesh
E-Mail: mdfirozm@yahoo.com

ABSTRACT: In this paper, a distinctive approach of Machine Translation (MT) from the Bangla language to the Universal Networking Language (UNL) is proffered. This approach corroborates analyzing Bangla sentences more precisely. The analysis churns out a semantic-net-like structure expressed by means of UNL. The UNL system comprises two major components, namely, the EnConverter (used for converting text from a native language to UNL) and the DeConverter (used for converting text from UNL to a native language).
This paper discusses the framework for designing EnConverter for Bangla language with a particular attention on generating UNL attributes and relations from Bangla Sentence input. The structural constitution of Bangla EnConverter, algorithm for understanding the Bangla sentence input and resolution of UNL relations and attributes are also conferred in this paper. The paper highlights the EnConversion analyzing rules for the EnConverter and indicates its usage in generating UNL expressions. This paper also covers the results of implementing Bangla EnConverter and compares these with the system available in a Language Server located in Russia. Keywords: en-converter, machine translation, knowledge base, natural language parsing, universal networking language. INTRODUCTION According to a story narrated in the "Book of Genesis of the Tanakh" (Hebrew Bible), everyone on Earth used to speak the same language. People there learned to make bricks and build a city with a skyscraping tower. Purpose of that skyscraper is to stay in a single building and not to be scattered over the world. Eventually, they developed diverse languages over several eras and got themselves scattered over the world, as they failed to comprehend each other’s language as well as motives. Natural Language Processing (NLP) has a potential to unite the Universe again, as per the aforementioned story, but not by the same language, rather by constructing common platform for all existing languages. UNL has been used by researchers as an</s>
<s>Interlingua approach for NLP. The World Wide Web (WWW) today has to face the complexity of dealing with multilingualism. People speak different languages and the number of natural languages along with their dialects is estimated to be close to 4000. The Universal Networking Language [1,2,3] has been introduced as a digital meta language for describing, summarizing, refining, storing and disseminating information in a machine-independent and human-language-neutral form. A good number of societies over the world are lagging behind in this age of Information Technology just because of the language barrier. There is a great need to translate digital contents, which include but are not limited to websites, blogs, online news portals, e-books, e-journals and e-mails, into the native language for overcoming that language barrier. This paper focuses on one such technology and includes the work carried out in this direction for the Bangla language. The UNL system has the EnConverter and the DeConverter as two important components. The EnConverter converts source language sentences into UNL expressions [4]. The DeConverter converts UNL expressions to target language sentences. With the development of an EnConverter for the Bangla language, Bangla text is converted to UNL expressions. It has the potential to translate Bangla text to any language, if that language has its own DeConverter which can convert the UNL expressions generated by the Bangla EnConverter into that destined language. This will certainly help to develop a multilingual machine translation system for the Bangla language. The organization of this paper is as follows: in Section 2, we describe the related works; Section 3 gives a short description of the UNL format for representation of information; Section 4 describes the design of the Bangla EnConverter. Finally, Section 5 draws conclusions with some remarks on future works.

RELATED WORKS
In order to design a multilingual machine translation system for the Bangla language, the interlingua approach is the best match as it requires only n interlingua transfer modules for n languages [5]. A transfer module of each language requires only two components: one for converting from the source language to the Interlingua and the other for converting the Interlingua to the target language. We have used the Universal Networking Language (UNL) as the Interlingua for this task, as the UNL representation has the right level of expressive power and granularity. UNL has 46 semantic relations and 86 attributes to express the semantic content of a sentence [5,6]. UNL has been developed and is managed by the Universal Networking Digital Language (UNDL) Foundation, an independent NGO founded in 2001 and based in Geneva, Switzerland, the extension of an initial project launched by the Institute of Advanced Studies of the United Nations University, Tokyo, Japan in 1996 [7,8]. For converting Bangla sentences to UNL expressions, firstly, we have gone through the Universal Networking Language (UNL) [10,11,12,13], where we have learnt about UNL expressions, Relations, Attributes, Universal Words, the UNL Knowledge Base,</s>
<s>Knowledge Representation in UNL, Logical Expressions in UNL, UNL systems and the specifications of the EnConverter. All these are key factors for preparing the Bangla word dictionary, enconversion and deconversion rules in order to convert a Bangla language sentence to UNL expressions. Secondly, we have rigorously gone through Bangla grammar [13,9], Morphological Analysis [14,15,16,17], and the construction of Bangla sentences [9] based on semantic structure. Using the above references we extract ideas about Bangla grammar for morphological and semantic analysis in order to prepare the Bangla word dictionary [18,19], morphological rules and enconversion rules in the format of UNL provided by the UNL center of the UNDL Foundation.

UNL format for representation of information
We presume a UNL representation consists of UNL relations, UNL attributes and Universal Words (UWs). UWs are represented by their English equivalents. These words are listed in the Universal Word Lexicon of the UNL knowledge base [1]. Relations are the building blocks of UNL sentences. The relations between the words are drawn from a set of predefined relations [3]. The attribute labels are attached to universal words to provide additional information like tense, number etc. For example, "সে স্কুলে যায়" (romanized "se school e jai", "he goes to school") can be represented as the following UNL expression: {unl} agt(go(icl>move>do,plt>place,agt>thing):0B.@entry.@present,he(icl>person):00) plt(go(icl>move>do,plt>place,agt>thing):0B.@entry.@present,school(icl>building>thing,equ>educational_institute):03) {/unl} We can note here that agt is the UNL relation which indicates "a thing which initiates an action"; obj is another UNL relation (not used in this example) which indicates "a thing in focus which is directly affected by an event"; @entry and @present are UNL attributes which indicate the main verb and tense information; and @sg is a UNL attribute which indicates number information.</s>
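To make the shape of these expressions concrete, the following Python sketch splits a single UNL relation line of the form shown above into its relation name, its two arguments, and the attribute labels attached to each argument. It is a toy parser written only for this illustration; it is not part of the UNL specification or of the authors' EnConverter.

```python
# Minimal, illustrative parser for one UNL relation line of the shape used above,
# e.g. "agt(go(icl>move>do,...):0B.@entry.@present,he(icl>person):00)".

def split_top_level(s):
    """Split a string on commas that are not nested inside parentheses."""
    parts, depth, start = [], 0, 0
    for i, ch in enumerate(s):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch == ',' and depth == 0:
            parts.append(s[start:i])
            start = i + 1
    parts.append(s[start:])
    return parts

def parse_argument(arg):
    """Return (head_word, attribute_labels) for one UNL argument."""
    head = arg.split('(', 1)[0]                       # word before the restriction list
    attrs = [p for p in arg.split('.') if p.startswith('@')]
    return head, attrs

def parse_relation(line):
    rel, rest = line.split('(', 1)
    rest = rest.rsplit(')', 1)[0]                     # drop the outermost closing ')'
    arg1, arg2 = split_top_level(rest)
    return rel, parse_argument(arg1), parse_argument(arg2)

line = "agt(go(icl>move>do,plt>place,agt>thing):0B.@entry.@present,he(icl>person):00)"
print(parse_relation(line))
# ('agt', ('go', ['@entry', '@present']), ('he', []))
```

The same split into relation, two arguments and attribute labels is what the attribute and relation resolution described in the following sections operates on.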
<s>Proposed algorithm descriptions
The Bangla EnConverter processes the given input sentence from left to right. It uses two types of windows [11], namely, the analysis window and the condition window, in the processing. The currently focused analysis windows are circumscribed by condition windows as shown in Figure-1. Here, 'A' indicates an analysis window, 'C' indicates a condition window, and 'ni' indicates an analysis node.

Bangla EnConverter architecture
The architecture of the Bangla EnConverter can be divided into six phases. It consists of the tasks of processing the input Bangla sentence by the Bangla parser, creation of a linked list of nodes on the basis of the output of the parser, extraction of UWs and generation of the UNL expression for the input sentence. The phases in the proposed Bangla EnConverter are the tokenize, linked list creation, Universal Word lookup, Case marker lookup, Unknown word handling and UNL creation phases. Figure-1. A schematic of EnConverter.

Tokenize phase
The Bangla EnConverter uses a Bangla parser to tokenize the input sentence. Parsing an input Bangla sentence produces the intermediate outputs of the tokenizer, morph analyzer, part-of-speech tagger, and person, number and Bivokti computation.

Linked list creation phase
In this phase, the Bangla EnConverter constructs a linked list of nodes. This linked list is constructed on the basis of information generated by the Bangla parser, the Bangla-UW dictionary and the root word-modifier table. Each root word of the token and the verb modifiers of the main verb act as the candidates for the node. For each root word, the words obtained by combining it with the root words of the next consecutive tokens are searched in the Bangla-UW dictionary and the root word-modifier table, so that the largest token can be formed on the basis of the root words stored in the Bangla-UW dictionary. If a token formed by the concatenation of consecutive root words is found as a single entry in the Bangla-UW dictionary or in the root word-modifier table, then that group of words is considered as a single token and stored as a node in the linked list; otherwise, each root word of the token is considered as a single token and stored as a node in the linked list. A node in the linked list has a Bangla root word attribute, a Universal Word attribute, a Part-of-Speech (POS) information attribute, and a list of lexical and semantic attributes.

Universal word lookup phase
In this phase, the Bangla-UW dictionary is used for mapping the Bangla root word of each node to Universal Words and for retrieving its lexical semantic information. The exact UW is extracted from the dictionary on the basis of the node's Bangla root word attribute and its grammatical category. Since the Bangla-UW dictionary may contain more than one entry for a given Bangla word, the searching process retrieves the UW that matches the node's Bangla word and its grammatical category. For example, the Bangla word খেল khel 'play' has two entries in the Bangla-UW dictionary, one as a noun and the other as a verb. The system selects only that entry which matches the grammatical category of the node given by the Bangla parser. If a node is marked as unknown in the first phase of parsing, then the node's Bangla word attribute is searched in the dictionary with its grammatical category as 'null'. In case of multiple entries for that word, the system returns the UW of the first entry and thus the unknown word becomes known during this phase. After extracting the UW, the node's UW attribute is updated and the linked list of lexical and semantic attributes is extended to append the UW dictionary attributes to the attributes generated by the parser.

Case marker lookup phase
If the Bangla root word attribute of a node is not found in the Bangla-UW dictionary, then it may be a case marker or function word of the language having no corresponding UW. In such a case, the node's Bangla word attribute is searched in the case marker lookup file. If the word is found, then the information about the case marker is added to the linked list of lexical and semantic attributes of the node and its UW is set to 'null' (because a case marker has no corresponding UW). This information plays an important role in resolving UNL relations in the UNL generation phase.</s>
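The lookup cascade just described, a dictionary entry matched by grammatical category with the case-marker file as a fallback, can be sketched roughly as follows. The dictionary contents, field names and return format here are invented stand-ins for illustration, not the actual Bangla-UW dictionary or case marker lookup file of the paper.

```python
# Toy Bangla-UW dictionary: root word -> list of (grammatical_category, universal_word).
BANGLA_UW = {
    "খেল": [("N", "play(icl>game>thing)"), ("V", "play(icl>do,agt>thing)")],
}
CASE_MARKERS = {"কে": {"NOM", "OBJ"}}     # toy case-marker lookup file

def lookup(node):
    entries = BANGLA_UW.get(node["root"], [])
    # Prefer the entry whose category matches the POS given by the parser.
    for cat, uw in entries:
        if cat == node.get("pos"):
            return {"uw": uw, "attrs": [cat]}
    if entries:                            # unknown POS: fall back to the first entry
        cat, uw = entries[0]
        return {"uw": uw, "attrs": [cat]}
    if node["root"] in CASE_MARKERS:       # function word: no UW, keep case information
        return {"uw": None, "attrs": sorted(CASE_MARKERS[node["root"]])}
    return None                            # left for the unknown-word handling phase

print(lookup({"root": "খেল", "pos": "V"}))   # verb reading of 'khel'
print(lookup({"root": "কে", "pos": None}))   # case marker: UW stays None
```

In the actual system the retrieved UW dictionary attributes are additionally appended to the attributes already produced by the parser, as described above.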
<s>Unknown word handling phase
If an unknown word is resolved in the Universal Word lookup phase, then the corresponding node is updated with its UW and dictionary attributes. Otherwise, it is resolved in the Case marker lookup phase. If some words still remain unknown, these words are processed in the Unknown word handling phase. In this phase, the system searches for an unknown word in the unknown word handling file. It contains only those Bangla words that are derived from some root words, because all other unknown words are resolved by the UW lookup phase or the Case marker lookup phase. For example, in the case of the unknown Bangla word যাবে jabe 'will go', the root word যা ja 'go' has বে bae as a modifier. This modifier contains tense, number and gender information about the sentence. It plays an important role in the generation of UNL attributes. Thus, a new node is inserted in the linked list for this modifier as the Bangla root word attribute, and its UW attribute, POS attribute and linked list of lexical semantic attributes are all set to 'null'. As such, in the case of the unknown word যাবে jabe 'will go', the node's Bangla word attribute is set to যা ja 'go' and a new node is inserted into the linked list with its Bangla word attribute as বে bae. If a node is updated by the unknown word handling phase, it is again processed in the Universal Word lookup phase for getting its UW; otherwise the token remains an unknown word. Figure-2. Flowchart of Bangla EnConverter.

Algorithm for UNL relation resolution
The Bangla EnConverter system invokes the following algorithm for UNL relation resolution and generation of attributes.
i) Process each node of the linked list by considering the first node as the left analysis window and the next node as the right analysis window.
ii) Search for the required rule from the EnConverter analysis rules. This depends upon the dictionary attributes of the left and right analysis windows.
iii) Modify the linked list to resolve the UNL relations and generate UNL attributes according to the fired rule. If no rule is fired, then go to step (v).
iv) Consider the first node of the modified linked list as the left analysis window and the next node as the right analysis window, and go to step (ii) with the new analysis windows.
v) If the modified linked list contains only a single node, then consider that node as the "entry node" and stop further processing. It means that all the nodes have been successfully processed by the system.
vi) If no rule is fired in step (ii), then shift the window to the right. This effectively means that the right analysis node will become the left analysis window and the next node will become the right analysis window. Go to step (ii) with the new analysis windows.

Enconversion of a Bangla sentence to UNL
"ivRv Zvnvi cy·K gyKzU w`‡e", pronounced 'Raja tahar putroke mukut dibe', means "The king will give crown to his son". Dictionary Entry: [ivRv]{} “king (icl>sovereign>thing,ant>queen)” (N,3P) [Zvnv]{} “he(icl>person)”(PRON,HPRON,3P) [i]{} “” (INF, INF6TH,OBJ, POS ) [cyÎ]{} “son(icl>male_offspring>thing,ant>daughter)” (N) [‡K]{} “”</s>
<s>(INF, INF2ND, NOM, OBJ, BEN) [gyKzU]{} “crown(icl>jewelled_headdress>thing)”(N) [w`]{}“give(icl>do,equ>hand_over,agt>thing,obj>thing,ben>person)” (ROOT,VEND,VEG1,#AGT,#OBJ) [‡e]{} “” (VI,VER,3P,FUT) Morphological Rules: First, second and third morphological analyses to be held between “w`” (di) & “‡e” (be), “cyΔ (putro) & “‡K” (ke) and “Zvnv” (taha) & “i” (r) to complete the meaning of the words “w`‡e” (dibe), “cy·K” (putroke) and “Zvnvi” (tahar) respectively using the following morphological rules: Rule1:+{ROOT,VEND,^ALT,^VERB:+VERB,-ROOT,+@::} {VI,VEND:@future::}P10; Rule2: +{N:@::}{INF, 2NDINF, NOM:::}P10; Rule3: +{PRON,HPRON:@::}{INF,6THINF,NOM:::}P10; Semantic Rules: First semantic relation is object (obj) relation which is made between “gyKzU” (crown) “w`‡e” (will give) using the following rule, >{N::obj:}{VERB,#OBJ:+&@future::}P10; Second semantic relation is made between “cy·K” (to son), which is beneficiary and “w`‡e” (will give) is dative case. Rule for dative case to perform semantic analysis is: >{N,BEN::ben:}{VERB:+&@future::} Third semantic relation is possessive (pos) relation to be held between “Zvnvi” (his) and “cy·K” (to son) using the following rule: >{PRON,HPRON,OBJ::pos:)}{N:::} Forth semantic relation is agent (agt) relation to be held between “ivR v” (king) and “w`‡e” (give) using the following rule: >{N,SUBJ::agt:)}{VERB:+&@future,&@entry::} Experimental result and testing system We have tested our system on several Bangla sentences. It has been seen that the system successfully handles the resolution of UNL relations and generation of attributes for these sentences. The system has been tested with the help of English sentences available at Russian UNL language server. We have manually translated the given English sentences at Russian language server into equivalent Bangla sentences and then inputted those equivalent Bangla sentences to the designed Bangla-UNL EnConverter system. We have compared the UNL expressions generated by our system with the UNL expressions generated by Russian UNL language server. This comparative analysis is given in Table-1 for five sentences. When more sentences are tested and rules will be added then accuracy will be increased. Table-1. A comparative analysis of UNL expressions generated by Bangla EnConverter and Russian UNL language server. S. No. Input Bangla sentence Relations resolved Rules fired UNL expressions generated by the Bangla EnConverter UNL expressions generated by the Russian UNL 1 আপিন কখন জােবন? Agt, tim R {SHEAD:::} {HPRON,SUBJ:::} P1; DR{HPRON,SUBJ,^blk:blk::} {BLK:::} P10; R {SHEAD:::} {HPRON,SUBJ:::} P1; R {HPRON,SUBJ:::} {QPRON:::} P1; DR {QPRON,^blk:blk::} {BLK:::} P10; R {HPRON,SUBJ:::} {QPRON:::} P1; R {QPRON:::} {ROOT,VEND:::} P1; +{ROOT,VEND,^VERB:+VERB,-ROOT,+@::} {KBIV,FUT:::} P8; :{:::} {VERB,KBIV:-KBIV,-VEND,+3P::} P10; R {QPRON:::} {VERB:::} P1; + {VERB:::} {QMARK:::} P8; > {QPRON::tim:} {VERB,#TIM:::} P8; > {HPRON,SUBJ::agt:} {VERB,#AGT:::} P8; R{SHEAD:::}{VERB,^&@entry,^&@future,^&@interrogative:+&@entry +&@future +&@interrogative::agt:01(leave(icl>refrain>do,obj>thing,agt>thing):0C.@entry.@future.@interrogative,you(icl>person):00tim:01(leave(icl>refrain>do,obj>thing,agt>thing):0C.@entry.@future.@interrogative,when(icl>how>time):05) tim(go(icl>move>do,plt>place,plf>place,agt>thing).@entry.@imperative.@interrogative,will(icl>legal_document>thing,pos>person)) mod(go(icl>move>do,plt>place,plf>place,agt>thing).@entry.@imperative.@interrogative,u-initial) VOL. 10, NO. 
15, AUGUST 2015 ISSN 1819-6608 ARPN Journal of Engineering and Applied Sciences ©2006-2015 Asian Research Publishing Network (ARPN). All rights reserved.www.arpnjournals.com 65472 ে ন আজ যােব না। agt tim R {SHEAD:::} {N:::} P1; DR {N,^blk:blk::} {BLK:::} P10; R {SHEAD:::} {N:::} P1; R {N:::} {N:::} P1; DR {N,^blk:blk::} {BLK:::} P10; R {N:::} {N:::} P1; R {TODAY,N:::} {DEPART,ROOT,^VERB:::} P1; +{DEPART,ROOT,VEND,^ALT,^VERB:+VERB,-ROOT,+@::} {KBIV,FUT:::} P8; : {:::} {VERB,KBIV:-KBIV,-VEND::} P10; > {TODAY,N::tim:} {VERB,#TIM:::} P8; >{TRAIN,N:&@def:agt:} {VERB,#AGT:::} P8; R {SHEAD:::} {VERB:::} P1; DR {VERB,^blk:blk::} {BLK:::} P10; R {SHEAD:::} {VERB:::} P1; DR{VERB,^&@entry,^&@future,^&@not:+&@entry +&@future +&@not::} {NOT:::} P10;agt(depart(icl>exit>do,equ>go,plt>thing,plf>thing,agt>thing):0A.@entry.@future.@not,train(icl>public_transport>thing):00.@def) tim(depart(icl>exit>do,equ>go,plt>thing,plf>thing,agt>thing):0A.@entry.@future.@not,today(icl>how>time,equ>nowadays):06) agt(depart(icl>exit>do,equ>go,plt>thing,plf>thing,agt>thing).@entry.@not.@future,train(icl>public_transport>thing)) tim(depart(icl>exit>do,equ>go,plt>thing,plf>thing,agt>thing).@entry.@not.@future,today(icl>how,equ>nowadays)) 3 সময় চেল যােc। obj R {SHEAD:::} {N:::} P1; DR {N,^blk:blk::} {BLK:::} P10; R {SHEAD:::} {N:::} P1;</s>
<s>R{TIME,N:::} PASS,ROOT,CEND,,^VERB:::} P1; +{PASS,ROOT,CEND,^VERB:+VERB,-ROOT,+@::} {KBIV,PRS,PRG,3P:::} P8; :{:::}{PASS,VERB,KBIV,^&@present,^&@progress:-KBIV,-CEND,&@present,&@progress::} P10; > {OBJ::obj:} {VERB,#OBJ:::} P8; R {SHEAD:::} {VERB,^&@entry:+&@entry::} P1; R {VERB:::} {STAIL:::} P1; obj(pass_by(icl>travel>occur,equ>travel_by,cob>thing,obj>thing):07.@entry.@present.@progress,time(icl>abstract_thing,equ>occasion):00) mod(pass(icl>accomplishment>thing,equ>base_on_balls).@entry,time(icl>abstract_thing,equ>occasion)) mod(pass(icl>accomplishment>thing,equ>base_on_balls).@entry,away(icl>adj,equ>away)) CONCLUSIONS In this paper, the structural constitution of Bangla EnConverter has been proposed and implemented. Bangla EnConverter uses the EnConversion analysis rules for UNL relation by resolution and generation of attributes derived from the input of Bangla ambiguous sentences. At this juncture, we have developed approximately five hundred EnConversion analytical rules. They were necessary for the development of proliferating Bangla EnConverter. This EnConverter has been tested and examined thoroughly for its performance in a public domain hosted by the Russian Language Server. The test results were encouraging as the system output in analogous with the Russian Language Server. At present, the Bangla EnConverter can process nothing further than simple sentence input. We are working on extending the scopes of the EnConverter to include clausal, interrogative and long sentences. Moreover, the effectual implementation of the ambiguity problem module in the proposed Bangla EnConverter is also being worked on till date. REFERENCES [1] Uchida H. and Zhu M. 1993. Interlingua for Multilingual Machine Translation CENTER OF THE INTERNATIONAL COOPERATION. MT Summit IV. pp. 157–169. , Kobe, Japan. [2] Dey K. and Bhattacharyya P. 2005. Universal Networking Language based analysis and generation of Bengali case structure constructs. Research on Computing Science. Vol. 12, pp. 215–229. [3] Uchida H. and Zhu M. 2001. The universal networking language beyond machine translation. International Symposium on Language in Cyberspace. pp. 1–15. , Seoul, Republic of Korea. [4] Kumar D.C.S. 1999. Bhasha-Prakash Bangala Vyakaran. Rupa and Company Prokashoni, Calcutta. [5] Hong M. and Streiter O. 1999. Overcoming the language barriers in the Web: The UNL-Approach. In Multilingual Corpora : encoding, structuring, analysis. 11th Annual Meeting of the German Society for Computational Linguistics and Language Technologiesing, Germany. [6] Universal Networking Language (UNL) Specifications Version 2005, http://www.undl.org/unlsys/unl/unl2005/. [7] Uchida H., Zhu M. and Senta T. Della. 1999. A gift for a Millennium. , Tokyo, Japan. [8] Dave S., Parikh J. and Bhattacharyya P. 2001. Interlingua-based English-Hindi Machine Translation and Language Divergence. Machine Translation. Vol. 16, pp. 251–304. [9] Ali N.Y., Das J.K., Al-Mamun S.M.A. and Nurannabi A.M. 2008. Morphological analysis of bangla words for universal networking language. 3rd International Conference on Digital Information Management, ICDIM 2008. pp. 532–537. VOL. 10, NO. 15, AUGUST 2015 ISSN 1819-6608 ARPN Journal of Engineering and Applied Sciences ©2006-2015 Asian Research Publishing Network (ARPN). All rights reserved.www.arpnjournals.com 6548[10] Dhanabalan T., Saravanan K. and Geetha T.V. 2002. Tamil to UNL EnConverter. International Conference onUniversal Knowledge and Language. , Goa, India. [11] Uchida H. 1987. 
ATLAS: Fujitsu Machine Translation System. Machine Translation Summit, Hakone, Japan. [12] Jain M. and Damani O.P. 2009. English to UNL (Interlingua) Enconversion. Second Conference on Language and Technology (CLT), Lahore, Pakistan. [13] EnConverter Specification Version 3.3, Tokyo, Japan (2002). [14] H. A. Bakkotottyo, Dhaka, Bangladesh (1994). [15] Mridha M.F., Huda M.N., Rahman C.M. and Das J.K. 2010. Development of morphological rules for Bangla root, verbal suffix and</s>
<s>primary suffix for universal networking language. ICECE 2010, pp. 570–573, Dhaka, Bangladesh. [16] Mridha M.F., Saha A.K. and Das J.K. 2014. New Approach of Solving Semantic Ambiguity Problem of Bangla Root Words Using Universal Networking Language (UNL). International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1–6. IEEE, Dhaka, Bangladesh. [17] Saha A.K., Mridha M.F. and Das J.K. 2014. Analysis of Bangla Root Word for Universal Networking Language (UNL). International Journal of Computer Applications. Vol. 89, pp. 8–12. [18] Saha A.K., Mridha M.F., Akhtar S. and Das J.K. 2013. Attribute Analysis for Bangla Words for Universal Networking Language (UNL). International Journal of Advanced Computer Science and Applications (IJACSA). Vol. 4, pp. 158–163. [19] Mridha M.F., Rahman M.S., Huda M.N. and Rahman C.M. 2010. Structure of dictionary entries of Bangla morphemes for morphological rule generation for universal networking language. 2010 International Conference on Computer Information Systems and Industrial Management Applications, CISIM. pp. 454–459, Krakow, Germany. [20] Mridha M.F., Banik M., Ali M.N.Y., Mohammad Huda N., Rahman C.M. and Das J.K. 2010. Formation of Bangla Word Dictionary Compatible with UNL Structure. 4th International Conference on Software, Knowledge and Information Management and Applications, Paro, Bhutan.</s>
<s>A Phrase-Based Machine Translation from English to Bangla Using Rule-Based Approach

*a,b Afsana Parveen Mukta, a,b Al-Amin Mamun, a Chaity Basak, a,b Shamsun Nahar, a,b Md. Faizul Huq Arif
a Department of Computer Science and Engineering (CSE), World University of Bangladesh (WUB), Bangladesh
b Researcher, SenSyss, Bangladesh
*Email address: afsanacse1206@gmail.com

Abstract—In this paper, a model of transfer architecture has been proposed which represents a Rule-Based Approach. This approach relies on fuzzy rules. It is a tense- and phrase-based English to Bangla transfer system. This article represents a knowledge-based technique with a set of data. A rough set technique is used in the knowledge representation system for language translation. This technique is used to categorize each English sentence into a particular group using attributes and organize it in a pattern. The patterns are arranged according to the rules, and then the system produces the target Bangla sentence. The whole procedure completes in 6 steps: 1) Collect data 2) Tokenize by word 3) Arrange according to rules 4) Morphological analysis 5) Reconstruct the Bangla sentence using the appropriate rule 6) Target sentence. Comparing the experimental result with Google Translator, it has been found that the model translation system provides higher accuracy than the compared translator.

Keywords— Machine Translation, Natural Language Processing, Verb Phrase, Noun Phrase, Language Translation.

I. INTRODUCTION
Bangla is a member of the Indo-Aryan languages [1], which have come from Sanskrit. Bangla is the state language of Bangladesh, where it is spoken as a first language by most of the people. In India, it is a recognized provincial language in the West Bengal, Tripura and Assam states. Natural Language Processing is the automatic manipulation of natural language. People started work on NLP 50 years ago. From the beginning, programmers found NLP to be a complex system. Though there is great scope for research, only a limited number of studies have been done in this field. A machine translation system is a part of artificial intelligence. There are many approaches which are used for Machine Translation (MT), and the rule-based approach is one of them. The rule-based approach is the technique which was developed first in the field of Machine Translation. This approach is mainly a collection of grammar rules and works in various stages of translation. A parse tree is also used for sentence structure in this approach. Moreover, this paper shows an optimal way of using the rule-based Machine Translation (MT) approach for English to Bangla translation to give a better translating system with a high accuracy rate. On exploring the related work, it is found that some researchers used the Cocke-Younger-Kasami algorithm for translation, where the translation process occurred through a parse tree [2]. Muntarina K. et al. analyzed all tenses but got 100% success only on the present indefinite, past indefinite and future indefinite tenses [3]. They worked on prepositions, which are rarely handled concepts [4]. Morphological analysis is implemented with a large number of affixes [5]. Roman characters are used by phonetic mapping [6]. The authors of the article [7] tried to develop a new approach for English</s>
<s>to Bangla translation. Francisca J. et al. used IF-Then rules for English to Bangla translation [8]. S. A. Rahman implemented a new NLP algorithm [9]. The authors of the article [10] proposed a case structure analysis for verbs. An experiment on a Bangla-English transfer machine translation system is reported in [11].

II. MACHINE TRANSLATION
Machine translation (MT) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another [12]. There are various kinds of machine translation approaches: i) Statistical Machine Translation (SMT) ii) Interlingua approach iii) Corpus-Based approach iv) Example-Based machine translation v) Hybrid Machine Translation approach vi) Rule-Based Approach.

A. Statistical Machine Translation (SMT)
SMT models take the view that every sentence in the target language (TL) is a translation of a source language (SL) sentence with some probability. SMT systems estimate language and translation models from very large quantities of monolingual and bilingual data using a range of theoretical approaches to probability distribution and estimation [13]. The best translation of a sentence is the one which has the highest probability. In SMT there are three major components: a language model, a translation model and a search algorithm. If a is a target language sentence and b is the source language sentence, then we can write a* = argmax_a P(a) * P(b|a), where P(a) is the probability of the kinds of sentences that are likely to occur in language a; this is known as the language model P(a). The way sentences in b get converted to sentences in a is called the translation model P(b|a). Fig. 1. SMT Architecture.

B. Interlingua Approach
Interlingua machine translation is the most advanced system. In the Interlingua method, the source text is analyzed into a representation from which the target text is directly generated. The intermediate representation includes all information necessary for the generation of the target text without 'looking back' to the original text [14]. The Interlingua language is created before the Interlingua approach can be applied. This language shares all the features and makes all the distinctions of all languages. In the Interlingua approach, an analyzer is used to put the source language into the Interlingua, and a generator converts the Interlingua into the target language. The Interlingua Approach follows two stages: 1. Extracting the meaning of a source language sentence in a language-independent form. 2. Generating a target language sentence from the meaning. Fig. 2. Interlingua Approach.

C. Corpus-Based Approach
In the corpus-based MT (CBMT) approach, two parallel corpora are available in the source language (SL) and target language (TL) where sentences are aligned. First, fragments of the input are matched against the parallel corpus, then the matched fragments are adapted to the TL, and finally these translated fragments are reassembled appropriately and the translation principle is applied [15]. Fig. 3 shows an example. The Corpus-Based Approach has three steps: 1. Matching fragments against the parallel training corpora. 2. Adapting the matched fragments to the target language. 3. Recombining these translated fragments appropriately. Fig. 3. Corpus-Based Approach.</s>
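As a toy illustration of the fragment-matching idea in the corpus-based approach, the sketch below greedily matches fragments of a new source sentence against a tiny aligned corpus and reuses the stored translations. The corpus entries (romanized Bangla for readability) and the greedy longest-match strategy are invented for this illustration, not taken from [15].

```python
# Toy fragment matching against a miniature aligned English-Bangla corpus.
parallel_corpus = {
    "i am playing": "ami khelchhi",
    "in the field": "mathe",
}

def translate_by_fragments(sentence):
    words = sentence.lower().split()
    output, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):      # try the longest fragment starting at i
            fragment = " ".join(words[i:j])
            if fragment in parallel_corpus:
                output.append(parallel_corpus[fragment])
                i = j
                break
        else:                                   # nothing matched: keep the word as-is
            output.append(words[i])
            i += 1
    return " ".join(output)

print(translate_by_fragments("I am playing in the field"))  # -> "ami khelchhi mathe"
```

A real CBMT system would additionally adapt the matched fragments to the target language and reorder them, rather than simply concatenating the stored translations as this sketch does.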
<s>D. Example-Based Machine Translation
Example-Based translation is based on recalling similar examples of the language pairs. In 1981 Makoto Nagao first proposed this concept of "Translation by Analogy". The Example-Based translation approach can be given a set of sentences in the source language, and then, using a point-to-point mapping, it translates each sentence into the target language. Memory-Based machine translation is another name for Example-Based Machine Translation. The advantage of an Example-Based system is that it can train the translation program and decode more quickly. Moreover, this approach works with a small set of data, even if it is only one sentence pair.

E. Hybrid Machine Translation Approach
The Rule-Based translation methodology and the statistical translation methodology together make up the Hybrid machine translation approach. The hybrid approach to machine translation tries to take the advantages of both frameworks by using available resources. Stat-XFER is such a hybrid machine translation framework, developed specifically to suit machine translation between morphologically-rich and resource-poor language pairs; in this framework, external tools can be provided and used during the process of translation. These include: a. A bilingual lexicon, possibly with probabilities per word pair. b. A morphological analyzer for the source language. c. A morphological disambiguator for the source language. d. A morphological generator for the target language.

F. Rule-Based Approach
A Rule-Based Machine Translation (RBMT) system is formed with a collection of rules. These rules are grammar rules. They are made by using a bilingual dictionary and good linguistic knowledge, and they are processed by the lexicon and software programs. They are based on the Chomsky hierarchy in the form of computational grammar rules. Basically, these rules consist of an analysis of the source language and generation of the target language in terms of grammar structures. The lexicon provides a dictionary for lookup of words during translation, while the software program allows effective and efficient interaction of the components. The approach depends heavily on language theory and is hence resource-intensive in terms of human labor and hours spent building the rules, but it is easy to maintain, easy to extend to other languages, and can deal with varieties of linguistic phenomena.

III. IMPLEMENTED METHOD
This paper is basically focused on English grammar, the noun phrase, the verb phrase, Bangla grammar Bivokti (inflection), and also the prepositional phrase. Applying these rules, the experiment covers different types of sentences: twelve tenses, three types of phrases, and affirmative as well as negative sentences. The model takes these types of sentences as input and tokenizes each sentence by word. In the next step, the Bangla translation of each token is fetched from the library according to the fuzzy rules. A morphological analyzer then analyzes these words for the target sentence, the Bangla words are reconstructed according to the appropriate rules, and finally the target Bangla sentence is found. Fig. 4. Proposed model for translating an English sentence to the Bangla output sentence.</s>
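A minimal sketch of the pipeline just described (tokenize, look up each word, render the preposition as a Bivokti suffix, and reorder English SVO into Bangla SOV) is given below. The dictionary, the suffix table and the hard-coded word roles are toy stand-ins, written in romanized Bangla for readability; they are not the authors' actual library, fuzzy rules or morphological analyzer.

```python
DICTIONARY = {"i": "ami", "playing": "khelchhi", "field": "math"}
BIVOKTI = {"in": "e"}          # preposition realized as a suffix (Bivokti) on the noun
DROPPED = {"am", "the"}        # auxiliary/article with no separate Bangla counterpart

def translate(sentence):
    tokens = [w.lower() for w in sentence.split()]
    subject, others, verb, pending_suffix = [], [], [], ""
    for w in tokens:
        if w in DROPPED:
            continue
        if w in BIVOKTI:                       # remember the suffix for the next noun
            pending_suffix = BIVOKTI[w]
            continue
        bangla = DICTIONARY.get(w, w)
        if w == "i":                           # toy role assignment: subject / verb / object
            subject.append(bangla)
        elif w == "playing":
            verb.append(bangla)
        else:
            others.append(bangla + pending_suffix)
            pending_suffix = ""
    return " ".join(subject + others + verb)   # Bangla word order: Subject + Object + Verb

print(translate("I am playing in the field"))  # -> "ami mathe khelchhi"
```

The sketch only illustrates the ordering and suffixing ideas; the role of each word is hard-coded here, whereas the implemented method derives it from the grammar rules and morphological analysis described next.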
<s>A. Analyzing Grammar for English to Bangla Language
English is a very rich language and English grammar is a large area. Analysis of the whole area is a huge task. Analysis has been done only for those things which are indispensable for this language translation, like the preposition, noun, verb, and phrase. Prepositions and verbs are the biggest part of English grammar and are indispensable for it. Without a verb, a sentence cannot be identified by the system. A main verb and an auxiliary carry different meanings in different sentences. In this language translation, the auxiliary verb is not translated on its own; it is joined with the main verb, which makes a verb phrase. On the other hand, in a Bangla sentence the auxiliary verb is not used directly. For example, in "He is playing", the auxiliary verb "is" and the main verb "playing" are considered together as a verb phrase, "is playing". If the auxiliary verb is translated into Bangla it means "হয়", but it is not used in the Bangla sentence. That is why the dictionary is developed in this way. In the same way, a verb and a preposition make a prepositional phrase. For example, in "I agree with him" the verb "agree" translates into Bangla as "রাজী", while the prepositional phrase "agree with" becomes "রাজী হওয়া" in Bangla. In the same way, a preposition and an object together make a noun phrase. Example: in "I am playing in the field" the object "field" is "মাঠ" in Bangla, but the noun phrase "to field" means "মাঠে". Here "e" is used as Bivokti. In Bangla grammar, the preposition is not used directly. Prepositions are used as Bivokti. If one or two letters are used as a suffix after the noun to make a relation with the other words, then it is called Bivokti. At the same time, Bivokti is not used in English sentences; different phrases and prepositions are translated as Bivokti in the Bangla sentence. For example, "I am singing at home" is in Bangla "আমি বাড়িতে গান গা"; here "at" is translated as "তে" and added after the object "home" as (বাড়ি + তে = বাড়িতে). Here the preposition is "at". In Bangla grammar, the preposition "at" translates as a Bivokti. The dictionary is built to handle enough auxiliary verbs and prepositions.

B. English and Bangla Language Structure
Language translation from English to Bangla needs a comparative structure between these two languages. It is helpful for understanding the major problems of language translation. First, the sentence structures of the two languages are described. With an example, the analysis has been done for the English structure "Subject + Verb + Object". Example: She plays carom (She + plays + carom). Bangla structure: Subject + Object + Verb. Example: (সে করম খেলে) (সে + করম + খেলে). The main limitation in translation between the two languages is the imbalanced sentence structure. The grammar is then analyzed to produce rules for language translation, because both languages have different grammatical rules. These rules are described with parse trees in Figures 5, 6, and 7. Fig. 5. Parse tree for English sentence. Fig. 6. Parse tree after generating grammatical rule. Fig. 7. Parse tree for Bangla sentence. For proper language translation, it is necessary to compare the English and Bangla language structures. While making the structure from English to Bangla grammar, a morphological analysis is needed. Another challenge of these two</s>
<s>languages is to synchronize between Bivokti and prepositions. Bivokti is used in Bangla grammar and prepositions are used in English grammar. Bivokti does not exist in English grammar; on the other hand, prepositions do not exist in Bangla grammar. Making verb phrases, noun phrases and prepositional phrases solves this problem. A preposition and a noun together make a noun phrase, an auxiliary verb and a main verb together make a verb phrase, and a preposition and a verb together make a prepositional phrase.

C. Sentence Comparison between the Implemented Method and Google Translator
Some differences between the implemented method and Google Translator are shown here. For the English sentence "I wish peace in the country", the accurate Bangla sentence is "আিম দেশ শাি কামনা কির"; the implemented method gives "আিম দেশ শাি কামনা কির" and Google Translator gives "আিম দেশর শাি চান". Another English sentence is "I am sleeping in my room"; the accurate Bangla sentence is "আিম আমার েম ঘুমা", the implemented method gives "আিম আমার েম ঘুমা" and Google Translator gives "আিম আমার েম ঘুমাে". Here the English sentence is "I am ashamed of his conduct"; the accurate Bangla sentence is "আিম তার আচরেণর জন ল ত", the implemented method gives "আিম তার আচরেণর জন ল ত" and Google Translator gives "আিম তার আচরেণর জন ল ত". For these three sentences, Google Translator gives two incorrect outputs while the implemented method gives three correct outputs. Comparing a number of sentences in this way with the implemented method and Google Translator, it is shown that the implemented method gives higher accuracy than Google Translator.

IV. EXPERIMENTAL RESULT
The proposed method finds the accuracy rate by comparing two files: one is the original file and the other is the implemented output file. First, the program counts the sentence and word numbers of the original file. Then it compares sentence by sentence and word by word. If it finds any word mismatch, it counts a word mismatch, and if it finds any sentence mismatch, it counts a sentence mismatch. Finally, the proposed method computes the sentence exactness and word exactness rates: SE = ((TS - MS) / TS) * 100% and WE = ((TW - MW) / TW) * 100%. Here, SE = Sentence Exactness rate, TS = Total Sentences, MS = Mismatched Sentences, WE = Word Exactness rate, TW = Total Words, MW = Mismatched Words. For the above equations, a total of 1113 sentences and 5967 words are applied to the Rule-Based Approach. The implemented method and Google Translate yield different accuracy rates. Fig. 8. Exactness rates compared to Google Translator (word exactness 98.81% for the rule-based approach vs. 51.43% for Google Translator; sentence exactness 95.51% vs. 55.62%). By analyzing Fig. 8, we can see that the sentence accuracy rate and word accuracy rate of the Rule-Based approach are higher than Google Translator's. The rule-based approach shows the higher accuracy rate. Fig. 9. Word and sentence counts compared to Google Translator. In Fig. 9, 1113 sentences and 5967 words are applied. The Rule-Based approach produced 50 mismatched sentences out of 1113 sentences and 71 mismatched words out of 5967 words, whereas Google Translator produced 494 mismatched sentences out of 1112 sentences and 2898 mismatched words out of 5967 words. The rule-based approach shows the lowest number of sentence and word mismatches.</s>
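The two exactness formulas can be checked directly against the mismatch counts reported above; the short computation below is only a worked example of the formulas, using the counts as stated in the text.

```python
# Worked example of SE and WE using the reported counts: 1113 sentences / 5967 words,
# 50 and 71 mismatches for the rule-based system, 494 (out of 1112) and 2898 for Google.

def exactness(total, mismatched):
    """Exactness rate in percent: ((total - mismatched) / total) * 100."""
    return (total - mismatched) / total * 100

print(f"Rule-based: sentence {exactness(1113, 50):.2f}%, word {exactness(5967, 71):.2f}%")
print(f"Google    : sentence {exactness(1112, 494):.2f}%, word {exactness(5967, 2898):.2f}%")
# Rule-based: sentence 95.51%, word 98.81%
# Google    : sentence 55.58%, word 51.43%
# (55.58% differs slightly from the 55.62% in Fig. 8 because the text reports 1112
#  rather than 1113 total sentences for Google Translator.)
```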
<s>After seeing all the evidence, it is clear that the rule-based approach shows more accuracy than Google Translator.

A. Comparison with Related Works
Extensive analysis shows that many types of research have been done using other approaches, but little work has been done with the Rule-Based Approach for English to Bangla translation. Compared with other research, this paper provides a high accuracy rate and a rich dictionary. This paper also gives better results on the verb phrase, noun phrase, and prepositional phrase.

B. Why is the Rule-Based Approach Best?
In the rule-based system, grammatical rules are used. The Rule-Based Approach can deal with a huge amount of data, which is difficult with other approaches. It is a trained system, which is why it can make decisions much faster without wasting time calculating the result. In addition, adding more rules does not create any difficulty for the system. The system generates all possible meanings, picks only the effective one, and the Bangla meaning changes according to the grammar rules. It can also give proper results on verb phrases, noun phrases and prepositional phrases.

V. CONCLUSION
There are many methods for Machine Translation, but this paper implemented a Rule-Based Machine Translation system which can translate English sentences to Bangla sentences using some sentence patterns. This paper focused on the twelve tenses, the verb phrase, noun phrase and prepositional phrase, and affirmative and negative sentences. The implemented method gives a sentence accuracy rate of 77.6% and a word accuracy rate of 80.88%. This method is also compared with Google Translator. A Rule-Based Machine Translation system is able to use the wisdom of the source language, the destination language, and grammatical rules. That is why the Rule-Based approach gives the best result compared to other approaches. The remaining two types of phrases and also idioms will be handled in the future. This system can also work on multiple languages.

REFERENCES [1] "Bengali language Britannica.com." [Online]. Available: https://www.britannica.com/topic/Bengali-language. [Accessed: 31-Jul-2018]. [2] S. Dasgupta, A. Wasif, and S. Azam, "An optimal way of machine translation from English to Bengali," in Proc. 7th International Conference on Computer and Information Technology (ICCIT), 2004, pp. 648–653. [3] K. Muntarina, M. G. Moazzam, and M. A.-A. Bhuiyan, "Tense Based English to Bangla Translation Using MT System," International Journal of Engineering Science Invention, 2013. [4] S. K. Naskar and S. Bandyopadhyay, "Handling of prepositions in English to Bengali machine translation," in Proceedings of the Third ACL-SIGSEM Workshop on Prepositions, 2006, pp. 89–94. [5] N. K. Zaman, M. A. Razzaque, and A. A. Talukder, "Morphological Analysis for English to Bangla Machine Aided Translation," in National Conference on Computer Processing of Bangla, Dhaka, Bangladesh, 2004. [6] N. UzZaman, A. Zaheen, and M. Khan, "A comprehensive roman (english)-to-bangla transliteration scheme," 2006. [7] S. Ahmed, M. O. Rahman, S. R. Pir, M. A. Mottalib, and Md. S. Islam. 2003, "A New Approach towards the Development of English to Bengali Machine Translation System", International Conference on Computer Information and Technology (ICCIT). [8] J. Francisca,</s>
<s>Md Mamun Mia, Dr. S. M. Monzurur Rahman. 2011, "Adapting Rule Based Machine Translation from English to Bangla", Indian Journal of Computer Science and Engineering (IJCSE). [9] S. A. Rahman, K. S. Mahmud, B. Roy, and K. M. A. Hasan. 2003, "English to Bengali Translation Using A New Natural Language Processing Algorithm," in International Conference on Computer Information and Technology (ICCIT). [10] M. K. Rhaman and N. Tarannum, "A rule based approach for implementation of bangla to english translation," in Advanced Computer Science Applications and Technologies (ACSAT), 2012 International Conference on, 2012, pp. 13–18. [11] M. M. Asaduzzaman and M. M. Ali, Transfer Machine Translation, "An Experience with Bangla English Machine Translation System", in the Proceedings of the International Conference on Computer and Information Technology, 2003. [12] "Science or Fiction: Machine Translation Explained | Blog | Ciklopea." [Online]. Available: https://ciklopea.com/translation/translation-technology/science-or-fiction-machine-translation-explained/. [Accessed: 03-Oct-2018]. [13] A. Way and N. Gough, "Comparing example-based and statistical machine translation," Natural Language Engineering, vol. 11, no. 3, pp. 295–309, 2005. [14] W. J. Hutchins and H. L. Somers, An introduction to machine translation, vol. 362. Academic Press London, 1992. [15] S. Nahar, M. N. Huda, M. Nur-E-Arefin, and M. M. Rahman, "Evaluation of machine translation approaches to translate English to Bengali," in Computer and Information Technology (ICCIT), 2017 20th International Conference of, 2017, pp. 1–5.
/Humanist521BT-LightItalic /Humanist521BT-RomanCondensed /Imago-ExtraBold /Impact</s>
<s>/ImprintMT-Shadow /InformalRoman-Regular /IrisUPC /IrisUPCBold /IrisUPCBoldItalic /IrisUPCItalic /Ironwood /ItcEras-Medium /ItcKabel-Bold /ItcKabel-Book /ItcKabel-Demi /ItcKabel-Medium /ItcKabel-Ultra /JasmineUPC /JasmineUPC-Bold /JasmineUPC-BoldItalic /JasmineUPC-Italic /JoannaMT /JoannaMT-Italic /Jokerman-Regular /JuiceITC-Regular /Kartika /Kaufmann /KaufmannBT-Bold /KaufmannBT-Regular /KidTYPEPaint /KinoMT /KodchiangUPC /KodchiangUPC-Bold /KodchiangUPC-BoldItalic /KodchiangUPC-Italic /KorinnaITCbyBT-Regular /KristenITC-Regular /KrutiDev040Bold /KrutiDev040BoldItalic /KrutiDev040Condensed /KrutiDev040Italic /KrutiDev040Thin /KrutiDev040Wide /KrutiDev060 /KrutiDev060Bold /KrutiDev060BoldItalic /KrutiDev060Condensed /KrutiDev060Italic /KrutiDev060Thin /KrutiDev060Wide /KrutiDev070 /KrutiDev070Condensed /KrutiDev070Italic /KrutiDev070Thin /KrutiDev070Wide /KrutiDev080 /KrutiDev080Condensed /KrutiDev080Italic /KrutiDev080Wide /KrutiDev090 /KrutiDev090Bold /KrutiDev090BoldItalic /KrutiDev090Condensed /KrutiDev090Italic /KrutiDev090Thin /KrutiDev090Wide /KrutiDev100 /KrutiDev100Bold /KrutiDev100BoldItalic /KrutiDev100Condensed /KrutiDev100Italic /KrutiDev100Thin /KrutiDev100Wide /KrutiDev120 /KrutiDev120Condensed /KrutiDev120Thin /KrutiDev120Wide /KrutiDev130 /KrutiDev130Condensed /KrutiDev130Thin /KrutiDev130Wide /KunstlerScript /Latha /LatinWide /LetterGothic /LetterGothic-Bold /LetterGothic-BoldOblique /LetterGothic-BoldSlanted /LetterGothicMT /LetterGothicMT-Bold /LetterGothicMT-BoldOblique /LetterGothicMT-Oblique /LetterGothic-Slanted /LevenimMT /LevenimMTBold /LilyUPC /LilyUPCBold /LilyUPCBoldItalic /LilyUPCItalic /Lithos-Black /Lithos-Regular /LotusWPBox-Roman /LotusWPIcon-Roman /LotusWPIntA-Roman /LotusWPIntB-Roman /LotusWPType-Roman /LucidaBright /LucidaBright-Demi /LucidaBright-DemiItalic /LucidaBright-Italic /LucidaCalligraphy-Italic /LucidaConsole /LucidaFax /LucidaFax-Demi /LucidaFax-DemiItalic /LucidaFax-Italic /LucidaHandwriting-Italic /LucidaSans /LucidaSans-Demi /LucidaSans-DemiItalic /LucidaSans-Italic /LucidaSans-Typewriter /LucidaSans-TypewriterBold /LucidaSans-TypewriterBoldOblique /LucidaSans-TypewriterOblique /LucidaSansUnicode /Lydian /Magneto-Bold /MaiandraGD-Regular /Mangal-Regular /Map-Symbols /MathA /MathB /MathC /Mathematica1 /Mathematica1-Bold /Mathematica1Mono /Mathematica1Mono-Bold /Mathematica2 /Mathematica2-Bold /Mathematica2Mono /Mathematica2Mono-Bold /Mathematica3 /Mathematica3-Bold /Mathematica3Mono /Mathematica3Mono-Bold /Mathematica4 /Mathematica4-Bold /Mathematica4Mono /Mathematica4Mono-Bold /Mathematica5 /Mathematica5-Bold /Mathematica5Mono /Mathematica5Mono-Bold /Mathematica6 /Mathematica6Bold /Mathematica6Mono /Mathematica6MonoBold /Mathematica7 /Mathematica7Bold /Mathematica7Mono /Mathematica7MonoBold /MatisseITC-Regular /MaturaMTScriptCapitals /Mesquite /Mezz-Black /Mezz-Regular /MICR /MicrosoftSansSerif /MingLiU /Minion-BoldCondensed /Minion-BoldCondensedItalic /Minion-Condensed /Minion-CondensedItalic /Minion-Ornaments /MinionPro-Bold /MinionPro-BoldIt /MinionPro-It /MinionPro-Regular /Miriam /MiriamFixed /MiriamTransparent /Mistral /Modern-Regular /MonotypeCorsiva /MonotypeSorts /MSAM10 /MSAM5 /MSAM6 /MSAM7 /MSAM8 /MSAM9 /MSBM10 /MSBM5 /MSBM6 /MSBM7 /MSBM8 /MSBM9 /MS-Gothic /MSHei /MSLineDrawPSMT /MS-Mincho /MSOutlook /MS-PGothic /MS-PMincho /MSReference1 /MSReference2 /MSReferenceSansSerif /MSReferenceSansSerif-Bold /MSReferenceSansSerif-BoldItalic /MSReferenceSansSerif-Italic /MSReferenceSerif /MSReferenceSerif-Bold 
/MSReferenceSerif-BoldItalic /MSReferenceSerif-Italic /MSReferenceSpecialty /MSSong /MS-UIGothic /MT-Extra /MTExtraTiger /MT-Symbol /MT-Symbol-Italic /MVBoli /Myriad-Bold /Myriad-BoldItalic /Myriad-Italic /Myriad-Roman /Narkisim /NewCenturySchlbk-Bold /NewCenturySchlbk-BoldItalic /NewCenturySchlbk-Italic /NewCenturySchlbk-Roman /NewMilleniumSchlbk-BoldItalicSH /NewsGothic /NewsGothic-Bold /NewsGothicBT-Bold /NewsGothicBT-BoldItalic /NewsGothicBT-Italic /NewsGothicBT-Roman /NewsGothic-Condensed /NewsGothic-Italic /NewsGothicMT /NewsGothicMT-Bold /NewsGothicMT-Italic /NiagaraEngraved-Reg /NiagaraSolid-Reg /NimbusMonL-Bold /NimbusMonL-BoldObli /NimbusMonL-Regu /NimbusMonL-ReguObli /NimbusRomNo9L-Medi /NimbusRomNo9L-MediItal /NimbusRomNo9L-Regu /NimbusRomNo9L-ReguItal /NimbusSanL-Bold /NimbusSanL-BoldCond /NimbusSanL-BoldCondItal /NimbusSanL-BoldItal /NimbusSanL-Regu /NimbusSanL-ReguCond /NimbusSanL-ReguCondItal /NimbusSanL-ReguItal /Nimrod /Nimrod-Bold /Nimrod-BoldItalic /Nimrod-Italic /NSimSun /Nueva-BoldExtended /Nueva-BoldExtendedItalic /Nueva-Italic /Nueva-Roman /NuptialScript /OCRA /OCRA-Alternate /OCRAExtended /OCRB /OCRB-Alternate /OfficinaSans-Bold /OfficinaSans-BoldItalic /OfficinaSans-Book /OfficinaSans-BookItalic /OfficinaSerif-Bold /OfficinaSerif-BoldItalic /OfficinaSerif-Book /OfficinaSerif-BookItalic /OldEnglishTextMT /Onyx /OnyxBT-Regular /OzHandicraftBT-Roman /PalaceScriptMT /Palatino-Bold /Palatino-BoldItalic /Palatino-Italic /PalatinoLinotype-Bold /PalatinoLinotype-BoldItalic /PalatinoLinotype-Italic /PalatinoLinotype-Roman /Palatino-Roman /PapyrusPlain /Papyrus-Regular /Parchment-Regular /Parisian /ParkAvenue /Penumbra-SemiboldFlare /Penumbra-SemiboldSans /Penumbra-SemiboldSerif /PepitaMT /Perpetua /Perpetua-Bold /Perpetua-BoldItalic /Perpetua-Italic /PerpetuaTitlingMT-Bold /PerpetuaTitlingMT-Light /PhotinaCasualBlack /Playbill /PMingLiU /Poetica-SuppOrnaments /PoorRichard-Regular /PopplLaudatio-Italic /PopplLaudatio-Medium /PopplLaudatio-MediumItalic /PopplLaudatio-Regular /PrestigeElite /Pristina-Regular /PTBarnumBT-Regular /Raavi /RageItalic /Ravie /RefSpecialty /Ribbon131BT-Bold /Rockwell /Rockwell-Bold /Rockwell-BoldItalic /Rockwell-Condensed /Rockwell-CondensedBold /Rockwell-ExtraBold /Rockwell-Italic /Rockwell-Light /Rockwell-LightItalic /Rod /RodTransparent /RunicMT-Condensed /Sanvito-Light /Sanvito-Roman /ScriptC /ScriptMTBold /SegoeUI /SegoeUI-Bold /SegoeUI-BoldItalic /SegoeUI-Italic /Serpentine-BoldOblique /ShelleyVolanteBT-Regular /ShowcardGothic-Reg /Shruti /SILDoulosIPA /SimHei /SimSun /SimSun-PUA /SnapITC-Regular /StandardSymL /Stencil /StoneSans /StoneSans-Bold /StoneSans-BoldItalic /StoneSans-Italic /StoneSans-Semibold /StoneSans-SemiboldItalic /Stop /Swiss721BT-BlackExtended /Sylfaen /Symbol /SymbolMT /SymbolTiger /SymbolTigerExpert /Tahoma /Tahoma-Bold /Tci1 /Tci1Bold /Tci1BoldItalic /Tci1Italic /Tci2 /Tci2Bold /Tci2BoldItalic /Tci2Italic /Tci3 /Tci3Bold /Tci3BoldItalic /Tci3Italic /Tci4 /Tci4Bold /Tci4BoldItalic /Tci4Italic /TechnicalItalic /TechnicalPlain /Tekton /Tekton-Bold /TektonMM /Tempo-HeavyCondensed /Tempo-HeavyCondensedItalic /TempusSansITC /Tiger /TigerExpert /Times-Bold /Times-BoldItalic /Times-BoldItalicOsF /Times-BoldSC /Times-ExtraBold /Times-Italic /Times-ItalicOsF /TimesNewRomanMT-ExtraBold /TimesNewRomanPS-BoldItalicMT /TimesNewRomanPS-BoldMT /TimesNewRomanPS-ItalicMT /TimesNewRomanPSMT /Times-Roman /Times-RomanSC /Trajan-Bold /Trebuchet-BoldItalic /TrebuchetMS /TrebuchetMS-Bold /TrebuchetMS-Italic /Tunga-Regular /TwCenMT-Bold 
/TwCenMT-BoldItalic /TwCenMT-Condensed /TwCenMT-CondensedBold /TwCenMT-CondensedExtraBold /TwCenMT-CondensedMedium /TwCenMT-Italic /TwCenMT-Regular /Univers-Bold /Univers-BoldItalic /UniversCondensed-Bold /UniversCondensed-BoldItalic /UniversCondensed-Medium /UniversCondensed-MediumItalic /Univers-Medium /Univers-MediumItalic /URWBookmanL-DemiBold /URWBookmanL-DemiBoldItal /URWBookmanL-Ligh /URWBookmanL-LighItal /URWChanceryL-MediItal /URWGothicL-Book /URWGothicL-BookObli /URWGothicL-Demi /URWGothicL-DemiObli /URWPalladioL-Bold /URWPalladioL-BoldItal /URWPalladioL-Ital /URWPalladioL-Roma /USPSBarCode /VAGRounded-Black /VAGRounded-Bold /VAGRounded-Light /VAGRounded-Thin /Verdana /Verdana-Bold /Verdana-BoldItalic /Verdana-Italic /VerdanaRef /VinerHandITC /Viva-BoldExtraExtended /Vivaldii /Viva-LightCondensed /Viva-Regular /VladimirScript /Vrinda /Webdings /Westminster /Willow /Wingdings2 /Wingdings3 /Wingdings-Regular /WNCYB10 /WNCYI10 /WNCYR10 /WNCYSC10 /WNCYSS10 /WoodtypeOrnaments-One /WoodtypeOrnaments-Two /WP-ArabicScriptSihafa /WP-ArabicSihafa /WP-BoxDrawing /WP-CyrillicA /WP-CyrillicB /WP-GreekCentury /WP-GreekCourier /WP-GreekHelve /WP-HebrewDavid /WP-IconicSymbolsA /WP-IconicSymbolsB /WP-Japanese /WP-MathA /WP-MathB /WP-MathExtendedA /WP-MathExtendedB /WP-MultinationalAHelve /WP-MultinationalARoman /WP-MultinationalBCourier /WP-MultinationalBHelve /WP-MultinationalBRoman /WP-MultinationalCourier /WP-Phonetic</s>
<s>/WPTypographicSymbols /XYATIP10 /XYBSQL10 /XYBTIP10 /XYCIRC10 /XYCMAT10 /XYCMBT10 /XYDASH10 /XYEUAT10 /XYEUBT10 /ZapfChancery-MediumItalic /ZapfDingbats /ZapfHumanist601BT-Bold /ZapfHumanist601BT-BoldItalic /ZapfHumanist601BT-Demi /ZapfHumanist601BT-DemiItalic /ZapfHumanist601BT-Italic /ZapfHumanist601BT-Roman /ZWAdobeF /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true /ColorImageMinResolution 150 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 300 /ColorImageDepth -1 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 2.00333 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /ColorImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasGrayImages false /CropGrayImages true /GrayImageMinResolution 150 /GrayImageMinResolutionPolicy /OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 300 /GrayImageDepth -1 /GrayImageMinDownsampleDepth 2 /GrayImageDownsampleThreshold 2.00333 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /GrayImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 1200 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 600 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.00167 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile (None) /PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA <FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064506390020064506420627064A064A0633002006390631063600200648063706280627063906290020062706440648062B0627062606420020062706440645062A062F062706480644062900200641064A00200645062C062706440627062A002006270644062306390645062706440020062706440645062E062A064406410629061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /CHS 
<FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP 
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA <FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) /HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) /NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS <FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV 
<FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO <FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
Bangla to English Machine Translation using Fuzzy Logic

Md. Musfique Anwar
Computer Science and Engineering Department, Jahangirnagar University, Bangladesh
Email: manwar@juniv.edu

Abstract- Transfer in machine translation (MT) plays an important role in producing correct output. This paper presents a technique to address structural and lexical mappings from different types of Bangla sentences for machine translation. Machine translation requires analysis, transfer and generation steps to produce target language output from a source language input. This paper deals with the syntactic transfer and generation of Bangla simple, complex and compound sentences into English. A structural representation of Bangla sentences encodes the information of those sentences, and a transfer module has been designed that can generate the English sentences from a corpus-based automatic Bangla machine translator using fuzzy logic. The effectiveness of this method has been justified by demonstration on different Bangla sentences, and the success rates in all cases are over 90%.

Keywords- Machine Translation, Structural representation, Fuzzy Logic, Corpus.

1. INTRODUCTION

Machine translation (MT) refers to the translation from one natural (source) language to another (target) language. It is an important area of Natural Language Processing (NLP). MT is a challenging job because a successful translator must produce exact target language output from a source language. At a minimum, transfer systems require monolingual modules to analyze and generate sentences, and transfer modules to relate equivalent translation representations of those sentences. To interpret language we need to determine a sentence's structure. To do this we must know the rules of how the language is organized and have an algorithm to analyze language based on those rules. Parsing serves in language to combine the meanings of words and phrases. A grammar captures the legal structure in a language and thus allows a sentence to be analyzed. Parsing a sentence then involves finding a possible legal structure for the sentence. The result is usually a tree (referred to as a parse tree) or structural representation (SR) [1]. Analysis and generation are two major phases of machine translation. There are two main techniques concerned in the analysis phase. These are:

Morphological Analysis
Morphological analysis is the determination of the grammatical categories (noun, verb, adjective, adverb, etc.) of the words of sentences. That means it incorporates the rules by which the words are analyzed. To give an English example, the words analyzes, analyzed and analyzing might all be recognized as having the same stem analyze and the common endings -s, -ed, -ing. The result of morphological analysis is then a representation that consists of both the information provided by the dictionary and the information contributed by the affixes. Morphological information of words is stored together with syntactic and semantic information of the words.
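As a rough illustration of the morphological analysis step just described, the following sketch strips the common English endings -s, -ed and -ing to recover a shared stem. The suffix list and the crude stem-repair rule are simplifying assumptions for illustration only, not the analyzer used in this work.

```python
# Toy morphological analysis: strip -s / -ed / -ing and recover a shared stem.
# (Suffix list and the "add back 'e'" repair are illustrative assumptions.)
SUFFIXES = ("ing", "ed", "s")

def analyse(word):
    """Return (stem, suffix) for a word, or (word, None) if no suffix matches."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            stem = word[: -len(suffix)]
            if suffix in ("ing", "ed") and not stem.endswith("e"):
                stem += "e"          # analyz -> analyze (crude repair)
            return stem, suffix
    return word, None

for w in ("analyzes", "analyzed", "analyzing"):
    print(w, "->", analyse(w))       # all three map to the stem 'analyze'
```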
Syntactic Analysis
Syntactic analysis involves the inclusion of a few rearrangement rules in the basic word-by-word approach, such as the inversion of 'noun-adjective' to 'adjective-noun'. Rearrangement rules may take into account fairly long sequences of grammatical categories, but they do not imply any analysis of syntactic structure like the identification of a noun phrase. Complete syntactic analysis involves the identification of relationships among phrases and clauses within sentences. Syntactic analysis aims to identify three basic types of information about sentence structure:

1) The sequence of grammatical elements, e.g. sequences of word classes (article + verb + preposition ...) or of functional elements (subject + predicate). These are linear (or precedence) relations.
2) The grouping of grammatical elements, e.g. nominal phrases consisting of nouns, articles, adjectives and other modifiers, prepositional phrases consisting of prepositions and nominal phrases, etc., up to the sentence level. These are constituency relations.
3) The recognition of dependency relations, e.g. the head noun determines the form of its dependent adjectives in inflected languages such as Bangla, German and Russian. These are hierarchical (or dominance) relations.

Actually each sentence is composed of one or more phrases. So if we can identify the syntactic constituents of sentences, it will be easier for us to obtain the structural representation of the sentence [2]. This paper implements a technique to perform syntactic analysis of Bangla sentences using a grammatical rule-base approach that accepts almost all types of Bangla sentences. All rule-bases, namely the inflection rule-bases, preposition mapping rule-bases and conjunct mapping rule-base, are designed to be declarative rather than procedural, which enables updating rules in a simple and easy manner [2].

A formal language is a set of words, i.e. finite strings of letters, symbols, or tokens. The set from which these letters are taken is called the alphabet over which the language is defined. A formal language is often defined by means of a formal grammar (also called its formation rules); accordingly, words that belong to a formal language are sometimes called well-formed words (or well-formed formulas). Natural languages such as Bangla, English and Chinese have no strict definition but are used by a community of speakers [3].

2. PHRASES

Most grammar rule formalisms are based on the idea of phrase structure: that strings are composed of substrings called phrases, which come in different categories. For example, the phrases "the cow", "the king" and "the agent in the corner" are all examples of the category noun phrase or NP [3]. A sentence must have a subject phrase and a predicate phrase. The subject is the part which names the person or thing we are speaking about, and the predicate is the part which tells something about the subject. Phrases form the building blocks for the syntactic structure of a sentence. In English, commonly used phrases are the Noun phrase, Adjective phrase, Adverbial phrase and Prepositional phrase. Within the early standard transformational models it is assumed that basic phrase markers are generated by phrase structure rules (PS rules) of the following sort [4]:

S → NP AUX VP
NP → ART N
VP → V NP
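To make the PS rules above concrete, here is a minimal sketch that encodes them as a context-free grammar and parses a toy English sentence. The use of NLTK and the small lexicon of terminal words are assumptions made for illustration; the paper itself does not prescribe a toolkit.

```python
# Encode the PS rules S -> NP AUX VP, NP -> ART N, VP -> V NP as an NLTK CFG
# and parse a toy sentence.  The terminal words are illustrative assumptions.
import nltk

grammar = nltk.CFG.fromstring("""
  S   -> NP AUX VP
  NP  -> ART N
  VP  -> V NP
  ART -> 'the'
  N   -> 'cow' | 'grass'
  AUX -> 'will'
  V   -> 'eat'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the cow will eat the grass".split()):
    tree.pretty_print()   # prints the parse tree licensed by the PS rules
```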
Each rule is essentially a formula, or specification, called a production rule of the grammar, used by the parser to parse sentences. For example, the PS rules given above tell us that an S (sentence) can consist of, or can be expanded as, the sequence NP (noun phrase) AUX (auxiliary verb) VP (verb phrase). The rules also indicate that NP can be expanded as ART N and that VP can be expressed as V NP.

There are three types of phrases in Bangla: the Noun phrase, Adjective phrase and Verb phrase. Simple sentences are composed of these phrases. Complex and compound sentences are composed of simple sentences [5].

2.1 Analysis
A sentence can be analyzed into three main parts: (i) syntactic interpretation (or parsing), (ii) semantic interpretation and (iii) pragmatic interpretation. Parsing is the process of building a parse tree for an input string [2][8][9]. The interior nodes of the parse tree represent phrases and the leaf nodes represent words. Semantic interpretation is the process of extracting the meaning of a sentence as an expression in some language representation. In this analysis phase, certain checks are made to ensure that the discrete input components fit together meaningfully. Pragmatic interpretation takes into account the fact that the same words can have different meanings in different situations [2][5].

3. BANGLA SENTENCE STRUCTURE

A simple sentence is formed by an independent clause or principal clause. Example: evjKwU Pv cvb K‡i. Now, a simple sentence can have 1) Subject (D‡Ïk¨) and 2) Predicate (we‡aq) parts. Again, the subject is of two types: a. Simple Subject (mij D‡Ïk¨), b. Expanded Subject (m¤úªmvwiZ D‡Ïk¨). And the predicate is also of two types: a. Simple Predicate (mij we‡aq), b. Expanded Predicate (m¤úªmvwiZ we‡aq).

In the Bangla language, a complex sentence consists of one or more subordinate clauses within a principal clause [2]. For example: ami Jakhon Dhaka gelam takhon se asustha chilo (Avwg hLb XvKv †Mjvg ZLb †m Amy¯' wQj). From the above examples, we see that these connecting words are placed in free order in Bangla but in fixed order in English. In the first example, the first word of the connective pair is used after the noun whereas the other is used before the pronoun; in the second example, both are placed before their pronouns. The following conjunctive words (Ae¨q) are generally used to connect the principal clause with the subordinate clause. After analyzing Bangla complex sentences, the following syntactic structure can be established: S → Conj* + DC + Conj + IC, where S = sentence, Conj = conjunctive word, DC = dependent clause, IC = independent clause.

When two or more independent clauses are connected by a conjunctive word (Ae¨q) and thus construct a single sentence, the sentence is said to be a compound sentence [3]. For example: se kal asbe ebong ami Jabo (‡m Kvj Avm‡e Ges Avwg hve). The following conjunctive words (Ae¨q) are generally used to connect the independent clauses: o (I), ebong (Ges), noile (bB‡j), fale (d‡j) etc.
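The clause structures above suggest a simple preprocessing step: splitting a sentence into clauses at the conjunctive words before each clause is transformed. The sketch below does this for romanized input; the word list and the function name are illustrative assumptions, not part of the paper's rule-base.

```python
# Split a romanized Bangla sentence into clauses at conjunctive words.
# The conjunctive-word set below is an illustrative assumption.
CONJUNCTIVE_WORDS = {"o", "ebong", "noile", "fale", "jakhon", "takhon", "jadi", "tahole"}

def split_into_clauses(tokens):
    """Return the clauses (lists of tokens) and the conjunctions found between them."""
    clauses, conjunctions, current = [], [], []
    for tok in tokens:
        if tok in CONJUNCTIVE_WORDS:
            if current:
                clauses.append(current)
                current = []
            conjunctions.append(tok)
        else:
            current.append(tok)
    if current:
        clauses.append(current)
    return clauses, conjunctions

clauses, conj = split_into_clauses("se kal asbe ebong ami Jabo".split())
print(clauses)   # [['se', 'kal', 'asbe'], ['ami', 'Jabo']]
print(conj)      # ['ebong']
```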
The syntactic structure for a compound sentence can be established as: sentence → clause + conjunctive word (Ae¨q) + sentence | clause.

3.1 Structural Transfer
Structural transfer for compound-complex sentences has two levels. Firstly, we extract the various clauses with respect to the conjunctive word, if available. Secondly, we perform structural transformation of every clause. Every conjunctive word has expectations in terms of clauses and an associated rule for structural transfer. The conjunctive word mapping rule-base contains the expectations and the structural transfer rule for every conjunctive word [6]. For example, "and" expects two clauses, and the structural transfer rule for "and" sentences would be clause1 + and (Ges) + clause2 [2].

It is seen that there are two simple sentences in the primitive complex sentence. One is the principal simple sentence and the other is the subordinate simple sentence. To translate a complex Bangla sentence, it is necessary to separate these two simple sentences. To do this, a given complex sentence is first scanned and then searched to determine which type of subordinator and/or subordinator complement is in the sentence. For example, consider the complex sentence "jadi tumi paro tahole pas koriba (hw` Zzwg co Zvn‡j Zzwg cvm Kwi‡e)". Here,
Principal simple sentence: tumi pas koriba (Zzwg cvm Kwi‡e)
Subordinate simple sentence: tumi paro (Zzwg co)
Subordinator: jadi (hw`)
Subordinator complement: tahole (Zvn‡j)
In complex sentences, such subordinators usually come before the subordinate simple sentence, and subordinator complements are added at various positions in the principal simple sentence [4].

4. BANGLA STRUCTURE ANALYSIS

4.1 Proposed Model
The proposed model for structural analysis of Bangla sentences is shown in Fig. 1.
Fig. 1 Block diagram of Bangla parser

4.2 Description of the Proposed Model
For parsing, we take a natural Bangla sentence as input. In the next phase, the stream of characters is sequentially scanned and grouped into tokens according to the lexicon. Words having a collective meaning are grouped together in the lexicon. The output of the Tokenizer for the input sentence "wbev©Pb n‡e Ges MYZš� cÖwZwôZ n‡e" is as follows [1][5]: Token = ("wbev©Pb", "n‡e", "Ges", "MYZš�", "cÖwZwôZ", "n‡e").

The parser is the most important tool of this phase. To ensure its validity within the underlying grammar, every sentence must be checked by the parser. The parser involves grouping of tokens into grammatical phrases that are later used to synthesize the output. Usually, the phrases are represented by a parse tree that depicts the syntactic structure of the input. The most common way to represent a grammar is as a set of production rules which say how the parts of speech can be put together to make grammatical, or "well-formed", sentences.

The lexicon is a list of allowable words. The words are grouped into categories, or parts of speech. A lexicon can also be defined as a dictionary of words where each word contains some syntactic, semantic, and possibly some pragmatic information.
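A minimal sketch of the tokenizer and lexicon lookup described above is given below. The lexicon entries are romanized stand-ins for the Bangla words of the paper's example sentence, and the field layout is an assumption; the real system stores Bangla script together with richer syntactic and semantic information.

```python
# Toy tokenizer + lexicon lookup.  Romanized entries stand in for the Bangla
# words of the example "nirbachon hobe ebong gonotontro protisthito hobe".
LEXICON = {
    "nirbachon":   {"pos": "Noun",         "english": "election"},
    "hobe":        {"pos": "Finite verb",  "english": "will be"},
    "ebong":       {"pos": "Indeclinable", "english": "and"},
    "gonotontro":  {"pos": "Noun",         "english": "democracy"},
    "protisthito": {"pos": "Adjective",    "english": "established"},
}

def tokenize(sentence):
    """Group the character stream into tokens (here: simple whitespace splitting)."""
    return sentence.strip().split()

def tag(tokens):
    """Look each token up in the lexicon; unknown words are tagged 'Unknown'."""
    return [(t, LEXICON.get(t, {}).get("pos", "Unknown")) for t in tokens]

tokens = tokenize("nirbachon hobe ebong gonotontro protisthito hobe")
print(tag(tokens))   # each token paired with its part of speech from the lexicon
```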
Lexicons are among the largest components of an MT system in terms of the amount of information they hold. The information in the lexicon is needed to help determine the function and meanings of the words in a sentence. Each entry in a lexicon will contain a root word called the head. The entries in a lexicon could be grouped and given by word category (by specifier, nouns, verbs, and so on), with all words contained within the lexicon listed under the categories to which they belong [1][3][4][5]. Fig. 2 illustrates a sample lexicon for Bangla parsing.

A corpus is a large and structured set of texts. Corpora are used to do statistical analysis and hypothesis testing, checking occurrences or validating linguistic rules on a specific universe. This is the basic training corpus used to train the alignment-template language model.
Fig. 2 Sample lexicon for Bangla parsing

Structural Representation is the process of finding a parse tree for a given input string. That is, a call to the parsing function PARSE, such as PARSE("The dog is dead"), should return a parse tree with root S whose leaves are "The dog is dead" and whose internal nodes are non-terminal symbols [3]. In linear text, we write the tree as:
[S : [NP : [Article : the] [Noun : dog]] [VP : [Verb : is] [Adjective : dead]]]
Fig. 3 shows the parse tree for this sentence.
Fig. 3 Parse tree for the sentence "The dog is dead"
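The bracketed Structural Representation above can be built and inspected directly; the sketch below uses nltk.Tree for convenience, which is an assumption on our part since the paper only specifies the bracketed form.

```python
# Build the Structural Representation of "The dog is dead" as a tree object.
from nltk import Tree

sr = Tree.fromstring(
    "(S (NP (Article the) (Noun dog)) (VP (Verb is) (Adjective dead)))"
)
sr.pretty_print()    # draws the parse tree of Fig. 3 as ASCII art
print(sr.leaves())   # ['the', 'dog', 'is', 'dead'] -- the sentence's words
```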
For conversion, we applied fuzzy logic for the interpretation of the input Bangla sentence into the English sentence as output, where fuzzy membership values are calculated with the help of the probability distribution of the words in the corpus [7]. This paper focuses on the formation and use of the grammar rules to be used by the parser in the syntax analysis phase. Structurally, there are three types of Bangla sentences: a. Simple Sentence (mij evK¨), b. Complex Sentence (RwUj evK¨), c. Compound Sentence (‡hŠwMK evK¨).

4.3 Basic Rules to Parse a Sentence*
1. Sentence → Simple sentence | Complex sentence | Compound sentence;
2. Simple sentence → Principal clause;
3. Complex sentence → Subordinate part + Additive word + Principal clause | Principal clause + Additive word + Subordinate part;
4. Subordinate part → Subordinate clause | Subordinate clause + Additive word + Subordinate part;
5. Subordinate clause → Additive word + Principal clause;
6. Additive word → Indeclinable | Null;
7. Compound sentence → Principal clause + Additive word + Compound part;
8. Compound part → Principal clause | Compound sentence;
9. Principal clause → Subject + Predicate;
10. Subject → Simple subject | Expanded subject;
11. Predicate → Simple predicate | Expanded predicate;
12. Simple Subject → Actor (KZ©„c`);
13. Actor → Noun + Inflection | Pronoun + Inflection | Implicit (Dn¨) Actor;
14. Pronoun → Person;
15. Person → FP | SP | TP; (Example: FP - aami, aamraa)
16. SP → SPH | SPNH | SPP; (Example: SPH - aapni, aapnaara; SPNH - tumi, tomraa; SPP - tui, toraa)
17. TP → TPH | TPNH; (Example: TPH - tini, taaraa; TPNH - shey, taaraa)
18. Implicit Actor → Null;
19. Expanded Subject → Sub-expander + Subject;
20. Sub-expander → Adjective | Adjective + Infinite verb | Adjective clause | Relative part (m¤^Ü c`/c`mgwó) | Relative part + Adjective | Adverbial clause;
21. Relative part → Noun + Gi (er) | Pronoun + Gi | Adjective + Gi;
22. Simple predicate → Verb clause | Implicit verb;
23. Implicit verb → Null;
24. Expanded predicate → Pre-expander + Verb clause;
25. Pre-expander → Adverb | Adverb + Adverb | Adverb + Object (Kg©c`) | Adjective + Object | Adjective expander (we‡kl‡Yi we‡klY) + Adjective + Object | Object | Adverbial clause;
26. Object → Noun | Pronoun | Relative part + Noun | Relative part + Pronoun | Null;
27. Verb clause → Infinite verb + Finite verb | Finite verb | Implicit verb | Infinite verb + Finite verb + Indeclinable | Finite verb + Indeclinable (Ae¨q c`);
28. Indeclinable → bv (na) | Other;
* The '→' sign means the phrase "can have the form of", the '|' sign indicates an alternative rule for the left-side term, and the '+' sign means the join of two terms of a sentence.

Fig. 4 Parse tree for a Compound Bangla sentence

5. IMPLEMENTATION OF PROPOSED MODEL

5.1 Generation of Structural Representation
A flow-chart of structural representation (SR) generation by means of the parsing approach is given below:
Fig. 5 Flow-chart for finding the respective parts of speech of a Bangla token

5.2 Implementation of Language Modeling
To implement the language model, a bilingual corpus of a large number of aligned sentence pairs is used as the training corpus. This corpus contains an English sentence and a Bangla sentence for each aligned pair. The translation model uses both the Bangla and English sentences to estimate the translation probability, which is considered as the fuzzy membership value of each Bangla word. Two steps are defined for this purpose, as described below.
[Fig. 6 Flow-chart for calculating the first word of a sentence: for each candidate word of the word-by-word English gloss (esentence), the corpus is scanned to accumulate "count" and "countfirst", and the candidate with the highest countfirst/count ratio is kept as the first word.]
Calculating the fuzzy membership value of each word to come first in the English sentence. To calculate this probability, the number of occurrences of each word is counted as "count" and the number of occurrences of that word in the first position of an English sentence is counted as "countfirst" from the bilingual corpus. Then "countfirst" is simply divided by "count" to get the particular probability of that word, and we assign this probability as the fuzzy membership value for that word. The flow-chart for calculating the first word of a sentence is shown in Fig. 6.

Calculating the fuzzy membership value of each other word to come next in the English sentence. For calculating the probability of each other word to come next, following the current word, a combination is formed with each other word. Then the number of occurrences of this combination is counted as "countcombination" and the number of occurrences of the current word is counted as "countindividual". Now "countcombination" is simply divided by "countindividual" to get the probability of every word, and this probability is assigned as the fuzzy membership value for the combination. If no next word is found, the current word is retained.
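A compact sketch of these two estimates is given below. The toy English-side corpus and the helper names are illustrative assumptions; the counters mirror the paper's "count", "countfirst", "countcombination" and "countindividual".

```python
# Estimate the two fuzzy membership values from a toy English-side corpus
# (standing in for the English half of the bilingual training corpus).
from collections import Counter

corpus = [
    "the election will be held",
    "the democracy will be established",
    "democracy will win",
]
sentences = [line.split() for line in corpus]

count = Counter(w for sent in sentences for w in sent)            # occurrences of each word
countfirst = Counter(sent[0] for sent in sentences)               # occurrences as first word
countcombination = Counter(
    (a, b) for sent in sentences for a, b in zip(sent, sent[1:])  # adjacent word pairs
)

def first_word_membership(word):
    """Fuzzy membership value for `word` opening an English sentence (countfirst/count)."""
    return countfirst[word] / count[word] if count[word] else 0.0

def next_word_membership(current, nxt):
    """Fuzzy membership value for `nxt` following `current` (countcombination/countindividual)."""
    return countcombination[(current, nxt)] / count[current] if count[current] else 0.0

print(first_word_membership("the"))          # 2/2 = 1.0
print(first_word_membership("democracy"))    # 1/2 = 0.5
print(next_word_membership("will", "be"))    # 2/3, "be" often follows "will"
```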
<s>sentence such as the tense, the person, the mode of verb (ক্রিয়ার fve) etc. can be extracted from a finite verb. Many previous works did so by decomposing the verb phrase. The inflection of verb (ক্রিয়া-wefw³) plays a very important role in this regard; further investigation can be done in decomposing the verb and then extracting the information. Few earlier works have proposed parsing method for different forms of Bangla present tense. We can extend those set in future by proposing methods for all other types. The inflection of Bangla verb (ক্রিয়া-wefw³) can have different forms depending on the tense, the person and the class of subject of the verb. REFERENCES [1] M. M. Hoque and M. M. Ali, “ A Parsing Methodology for Bangla Natural Language Sentences”, Proceedings of International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, pp. 277-282 (2003). [2] K. D. Islam, M. Billah, R. Hasan and M. M. Asaduzzaman, “ Syntactic Transfer and Generation of Complex-Compound Sentences for Bangla-English Machine Translation”, Proceedings of International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, pp. 321-326 (2003). [3] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2 nd Edition, Pearson Education publisher, New York, 2003. [4] S. K. Chakravarty, K. Hasan, A. Alim, “A Machine Translation (MT) Approach to Translate Bangla Complex Sentences into English” Proceedings of International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, pp. 342 -346 (2003). [5] L. Mehedy, N. Arifin and M. Kaykobad, “Bangla Syntax Analysis: A Comprehensive Approach”, Proceedings of International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, pp. 287 -293 (2003). [6] D. Rao, P. Bhattacharya and R. Mamidi, “Natural Language Generation for English to Hindi Human-Aided Machine Translation”, Proceedings of International Conference on Knowledge Based Computer Systems, (Mumbai, India), pp. 171 -189 (1998). [7] M. G. Uddin, M. Murshed, M. A. Hasan, “A parametric approach to Bangla to English Statistical Machine Translation for complex Bangla sentences -Step 1”, Proceedings of International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, pp. 529-534 (2005). [8] M. M. Anwar, M. Z. Anwar, M. A. Bhuiyan, “Syntax Analysis and Machine Translation of Bangla Sentences”, IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.8, August 2009 , pp. 317-326. International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 11, November 2018164 https://sites.google.com/site/ijcsis/ ISSN 1947-5500 [9] S. S. Ashrafi, M. H. Kabir, M. M. Anwar, A. K. M. Noman, “English to Bangla Machine Translation System Using Context-Free Grammars”, IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 3, No 2, May 2013 , pp. 144-153. International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 11, November 2018165 https://sites.google.com/site/ijcsis/ ISSN 1947-5500</s>
<s>Syntax Analysis and Machine Translation of Bangla SentencesSee discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/340886414English Translation of Bangla Simple Sentences Using Bilingual CorpusConference Paper · February 2011CITATIONSREADS2 authors:Some of the authors of this publication are also working on these related projects:Community Detection View projectFace Recognition Using Eigenface View projectMd Musfique AnwarSwinburne University of Technology21 PUBLICATIONS 22 CITATIONS SEE PROFILEMd. Al-Amin BhuiyanKing Faisal University72 PUBLICATIONS 595 CITATIONS SEE PROFILEAll content following this page was uploaded by Md Musfique Anwar on 24 April 2020.The user has requested enhancement of the downloaded file.https://www.researchgate.net/publication/340886414_English_Translation_of_Bangla_Simple_Sentences_Using_Bilingual_Corpus?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_2&_esc=publicationCoverPdfhttps://www.researchgate.net/publication/340886414_English_Translation_of_Bangla_Simple_Sentences_Using_Bilingual_Corpus?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_3&_esc=publicationCoverPdfhttps://www.researchgate.net/project/Community-Detection-12?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_9&_esc=publicationCoverPdfhttps://www.researchgate.net/project/Face-Recognition-Using-Eigenface?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_9&_esc=publicationCoverPdfhttps://www.researchgate.net/?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_1&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Anwar3?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Anwar3?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Swinburne_University_of_Technology?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Anwar3?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Bhuiyan8?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Bhuiyan8?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/King_Faisal_University?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292
ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Bhuiyan8?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Anwar3?enrichId=rgreq-fb266675c0b49a82079d6096aa0df262-XXX&enrichSource=Y292ZXJQYWdlOzM0MDg4NjQxNDtBUzo4ODM3Njg2ODg2NTIyODhAMTU4NzcxODI4MzA5Ng%3D%3D&el=1_x_10&_esc=publicationCoverPdfSecond International Conference on Computational Intelligence Applications 2011English Translation of Bangla Simple Sentences Using Bilingual CorpusMd. Musfique Anwar, Nasrin Sultana Shume and Md. Al-Amin BhuiyanDept. of Computer Science & Engineering, Jahangirnagar University, Dhaka, BangladeshEmail: musfique.anwar@gmail.com, shume_sultana@yahoo.com, alamin_bhuiyan@yahoo.comAbstractTransfer in machine translation (MT) plays an important role for producing correct output. This paper presents a technique to analyze and implement a corpus based automatic Bangla machine translator. The study is based on a bilingual corpus of Bangla and English texts and translation unit alignment. A bilingual dictionary contains the translation probability of English word. Our proposed MT system can be extendable to paragraph translation.Keywords:Machine Translation, Bilingual Corpus, Bilingual Dictionary, Translation Probability etc.1. Introduction Machine translation (MT) refers the translation from one natural (source) language to another (target language). It is an important area of Natural Language Processing (NLP). MT is a challenging job due to building up a successful translator for producing exact target language output from a source language. At a minimum, transfer systems require monolingual modules to analyze and generate sentences, and transfer modules to relate equivalent translation representations of those sentences [1]. We use statistical approach for machine translation. The Statistical Machine Translation (SMT) constructs a general model of the translation relation, and then let the system acquire specific rules automatically from the bilingual and monolingual text corpora [2].Many factors make transfer system of MT an attractive issue [3]. These are:• Many systems are bilingual, or their principal use for translation in one direction between a limited numbers of languages.• Where full multilinguality is required it is possible to have a hub language into and out of which translation is done.• Portions of transfer modules can be shared when closely related languages are involved.2. Statistical Machine Translation (SMT)The Statistical approach is the use of statistics in computational linguistics. The most established SMT system is based on word for word substitution although some experimental SMT systems employ syntactic processing.Statistical approaches to MT means:• Approaches which does not use explicitly formulated linguistic knowledge to perform MT or,• The application of statistical techniques on calculating probability to aid parts of the MT task (example word sense disambiguation).The idea behind SMT approach is to let a computer learn automatically how to translate text from one language to another by examining large amounts of parallel bilingual text, i.e. documents which are nearly exact translation of each other. The Statistical MT approach uses statistical data to perform translation. This statistical data is obtained</s>
<s>from an analysis of a vast amount of bilingual texts. Different probabilities are extracted from the bilingual texts automatically by a computer and these are:i) The probability of a source sentence to occur in the texts.ii) The probabilities of a source word to be translated as one, two, three etc. target words.iii) The translation probabilities of each word in each language, andiv) The probabilities of the position of each word in the source language sentence which is not in the same position of the target language word in the target sentence (i.e. the probability of distortion).These probabilities are vital to the translation process as these are the sole information for calculating how the source language sentence should be translated to the target language form.2.1 Basic ProbabilitiesLet us consider that an English sentence e may translate into any Bangla sentence b. The basic probabilities are given below:Priori probability, P (e): The probability that e happens. For example, if e is the English string “I eat rice”, then P (e) is the probability that a certain person at a certain time will say, “I eat rice” as opposed to saying something else.Conditional probability, P (b | e): The probability of b given e. For example, if e is the English string “The boy drinks tea” and if b is the Bangla string 505mailto:alamin_bhuiyan@yahoo.commailto:musfique.anwar@gmail.comSecond International Conference on Computational Intelligence Applications 2011“ ”, then P (b | e) is the probability that upon seeing e, a translator will produce b. Joint probability, P (e , b): The probability of e and b both happening. If e and b do not influence each other, then we write P (e , b) = P (e) * P (b). For example, if e stands for “the first roll of the die comes up 5” and b stands for “the second roll of the die comes up 3”, then P (e , b) = P (e) * P (b) = (1/6) * (1/6) = 1/36. If e and b do influence each other, then we had better write P (e , b) = P (e) * P (b | e). That means the probability that “e happens” times the probability that “if e happens then b happens”. If e and b are strings that are mutual translations, then there is definitely some influence.2.2 Translation processEven though text alignment is not really a [art of the actual translation process, it serves, as a necessary tool for creating dictionaries and grammars, thus improving the quality of MT [4]. Text alignment is the first step to make bilingual corpora useful. Word alignment means a group of sentences in one language that corresponds in content to some group of sentences in the other language, where either group can be empty so as to allow insertions and deletions.A corpus is a large and structured set of texts. They are used to do statistical analysis and hypothesis testing, checking occurrences or validating linguistic rules on a specific universe [5]. This is the basic training</s>
<s>corpus used to train the alignment template Language Model. Aligning a corpus means making each translation unit of the source corpus correspond to an equivalent unit of the target corpus. In this case, the term "translation unit" covers both larger sequences such as chapters or paragraphs and shorter sequences such as sentences, syntagms or simply words [6].The language model provides us with probabilities for string of words. We need to build a machine that assigns a probability P (e) to each English sentence e. It concerns with the probabilities of the occurrence of a word with its neighboring words to form a string of words. The language model use a bi-gram model which takes into account every two neighboring words for calculating P (e). In order to calculate the source language probabilities, a large amount of monolingual data is required, since of course the validity, usefulness or accuracy of the model will depend mainly on the size of the corpus.3. Practical Implementation3.1 Bilingual corpusThe bilingual corpus contains both an English sentence and a Bangla sentence for each aligned pair. This is the basic training corpus used to train the alignment template language model. The translation model uses both the English and Bangla sentences to estimate the translation probability of each Bangla word. The language model uses only the English sentences to set the grammatical structure of the expected English sentence as output. Fig. 1 shows a sample bilingual corpus.Fig. 1 Sample Bilingual Corpus3.2 Bilingual DictionaryDictionaries are the largest component of an MT system in terms of information it holds. In a bilingual dictionary various types of entries are possible. Normally “paper dictionaries” are collection of entries. That is, they are basically lists of words, with information about various properties of the word. But in this model, bilingual dictionary contains only two information of each Bangla word. According to the alignment in the bilingual corpus each Bangla word contains the translation probability of the connected English word. The formation of this bilingual dictionary is as,Which represents that the translation probability of“I” is 0.93456243 and 0.068763 to be the translation of and respectively and so on.3.3 Training ProcedureTraining the bilingual corpus by means of translation modeling is a matter of inducing the translation probability table. For given English word e, we pretend that all Bangla words connected to each English word are equally likely translations. For a given sentence pair, all alignments will therefore look equally likely as well. Also for language modeling each English sentence is traversed to form the 515Second International Conference on Computational Intelligence Applications 2011sentence linguistically. A Bangla sentence is taken as input from the user and for each Bangla word the highest translation probability is taken as the translation of that Bangla word.4. Experimental Result For implementing the language model, bi-gram model is considered. First the exact translation of each Bangla word is chosen from the bilingual dictionary and initially an English sentence is made. Then the probability of each English word is calculated to come first</s>
<s>of the English sentence by training the bilingual corpus. The exact word is chosen from among these probabilities for which the probability is highest. Then from the rest of the words the probability is calculated to come next and so on. And again form these probabilities the next word of the highest probability is chosen. In this way, the final English sentence is constructed which follows the training corpus. Fig. 2 illustrates the snapshot of the implemented method.Fig. 2 Sample output of the implemented MT model of English translation of Bangla sentences5. ConclusionFrom a methodological point of view, combining a linguistic approach with a statistical approach makes it possible to fine-tune the alignment and enhance processing of bilingual corpora with a view to machine translation. The primary goal is to develop an MT system for English-Bangla integrating proper linguistic analysis and syntactic transfer into a data-driven approach. This paper focuses on the improvement of translation quality and the adaptability of the system to the user’s requirements. We have tried to set up an appropriate model to adapt SMT system of English translation of Bangla simple sentences. The model can be extended to perform machine translation of Bangla complex and compound sentences to English in future.References [1] A. Trujillo, “Translation Engines: Techniques for Machine Translation”, Springer-Verlag, London, (1992).[2] K. Knight, “Automatic Knowledge Acquisition for Machine Translation”, AI Magazine 18(4), (1997).[3] M. M. Asaduzzaman and M. M. Ali, “Transfer Machine Translation – An Experience with Bangla English Machine Translation System”, Proceedings of International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, pp. 265-270 (2003).[4] T. Watanabe and E. Sumita, “Example-based Decoding for Statistical Machine Translation”, ATR Spoken language, Translation research Laboratories 2-2-2, Keihanna Science City Kyoto 619-0288 Japan, (2001).[5] M. M. Anwar, M. Z. Anwar and M. A. Bhuiyan, “Structural Analysis of Bangla Sentences for Machine Translation”, Proceedings of International Conference On Computational Intelligence Applications, India, pp. 230–237 (2010).[6] M. Guidère, “Toward Corpus-Based Machine Translation for Standard Arabic”, Translation Journal, Vol. 6 No. 1, (2002).525Second International Conference on Computational Intelligence Applications 2011Md. Musfique Anwar completed his B.Sc (Engg.) in Computer Science and Engineering from Dept. of CSE, Jahangirnagar University, Bangladesh in 2006. He is now a Lecturer inthe Dept. of CSE, Jahangirnagar University, Savar, Dhaka, Bangladesh. His research interests include Natural Language Processing, Artificial Intelligence, Image Processing, Pattern Recognition, Software Engineering and so on.Nasrin Sultana Shume completed her B.Sc (Engg.) in Computer Science and Engineering from Dept. of CSE, Jahangirnagar University, Bangladesh in 2006. She is now a Lecturerin the Dept. of CSE, Green University of Bangladesh, Mirpur, Dhaka, Bangladesh.Her research interests include Artificial Intelligence, Neural Networks, Image Processing, Pattern Recognition, Database and so on.Md. Al-Amin Bhuiyan received his B.Sc (Hons) and M.Sc. in Applied Physics and Electronics from University of Dhaka, Dhaka, Bangladesh in 1987 and 1988, respectively. He got the Dr. Eng. Degree in Electrical Engineering from Osaka City University, Japan, in 2001. 
He has completed his Postdoctoral in the Intelligent Systems from National Informatics Institute, Japan. He is</s>
<s>now a Professor in the Dept. of CSE, Jahangirnagar University, Savar, Dhaka, Bangladesh. His main research interests include Image Face Recognition, Cognitive Science, Image Processing, Computer Graphics, Pattern Recognition, Neural Networks, Human-machine Interface, Artificial Intelligence, Robotics and so on.535View publication statsView publication statshttps://www.researchgate.net/publication/340886414</s>
<s>Abstract—Case structure plays a vital role in grammatical structure of any language during language translation. This paper presents an in-depth analysis of Bangla locative case constructs based on UNL (Universal Networking Language) machine translation scheme. A set of analysis rules have been defined to convert various Bangla locative case sentences into UNL expressions that can later be converted to any native language using language independent deconversion rules. We have demonstrated five different analysis rules and illustrated how each of them can effectively convert Bangla sentences to UNL expressions Index Terms—Universal networking language (UNL), universal words (UWs), Bangla-UNL dictionary, morphological analysis, EnConverter (EnCo). I. INTRODUCTION UNL is a digital meta-language for describing, summarizing, refining and disseminating information in a machine independent and human languae neutral form, which represents information in the form of semantic networks with hypergraph. The hypergraph has formal English text realization as English is known to experts. It consists of Universal Words (UWs), UNL Relations and UNL attributes. An UW expresses the English equivalent meaning of the word along with some constraints lists and is to be used in creating UNL expression of output. UNL Relations are the building blocks of UNL expressions. The existence of UNL expressions relation is found in between two UWs of sentence. Relation between the words is drawn from a set of predefined relations. The UNL attributes are attached with UWs to provide additional information like tense, numbers etc. to complete the UNL expressions [1]. A set of analysis rules is to be used to generate UNL expressions from Bangla locative case sentences. The EnConverter [1], [2] is a language independent parse that provides synchronously a framework for morphological, syntactic and semantic analyses. EnConverter operates on the nodes of the Node-list through its windows. EnConverter analyses a sentence using the Word Dictionary, and enconversion Rules. It retrieves relevant dictionary entries from the word dictionary, operates on nodes in the Node-list by applying enconversion rules, and generates semantic networks of UNL by consulting the Knowledge Base. It generates UNL expressions from sentences of a native language using enconversion rules by finding the most Manuscript received August 8, 2013; revised December 12, 2013. Nawab Y. Ali and Ameer A. are with East West University, Dhaka, Bangladesh (e-mail: nawab@ewubd.edu, dmaa730@gmail.com). Golam S. is with Southern Cross University, Australia (e-mail: golam.sorwar@scu.edu.au). suitable rules for the respective sentences. Among the various types of analysis rules described in [2], left & right composition and left & right modification rules play important roles in conversion processes. Composition rules combine the two headwords of the left and right nodes into a composite node to perform the morphological analyses and modification rules create the syntactic trees and the semantic relations of the two nodes on the analysis windows to perform semantic analyses of the sentences. For example, the UNL expression and the UNL graph of the sentence We live in Bangladesh is shown in Fig 1. {unl} aoj(live(icl>be,aoj>person).@entry.@present,we(icl>group).@pl) plc(live(icl>be,aoj>person)@entry.@present,bangladesh(iof>asian_country>thing)) {/unl} Fig. 1. UNL expressions and UNL graph. In Fig. 1, aoj is</s>
<s>the UNL relation which indicates "thing with attribute"; plc is another UNL relation which indicates "a place where an event occurs"; @entry and @ present are UNL attributes which indicate the main verb and tense information; and attribute @pl indicates number information. This organization of this paper is determined as following: The literature, which is related to the UNL structure and format of dictionary is illustrated in Section II. Analysis of different Bangla locative case sentences based on UNL structure and development of analysis rules are demonstrated in Section III. In Section IV we have illustrated the step by step conversion procedures of a complete locative case sentence using some analysis rules, and the conclusion of the paper has been drawn along with several concluding remarks in Section V. II. LITERATURE REVIEW Generation of Hindi from Universal Networking Language UNL-Based Machine Translation Scheme for Bangla Locative Case Constructs Nawab Y. Ali, Golam S., and Ameer A. live(icl>be,aoj>person).@entry.@present bangladesh(iof>asian_country>thing) we(icl>group).@pl aoj plc International Journal of Information and Education Technology, Vol. 4, No. 5, October 2014454DOI: 10.7763/IJIET.2014.V4.449has been analyzed by Dwivedi [3]. UNL based MT system for Hindi language and Hindi generation rules for Hindi Enconverter have been analyzed and created by Giri [4], Dave [5]. The analysis of Tamil morphology for the development of Tamil Enconverter has been performed by Dhanabalan [6]. Arabic grammar generator has been proposed for the advancement of Arabic MT system based on UNL by Adly and Alansary [7]. Morphological analyses of Bangla simple and compound words for MT have been discussed in [8], [9]. Similar approaches have also been observed in languages like Frence, Spanish, Chinese, English, Russian and German [10]. Apparently, numerous research works on morphological analysis of Bangla words for UNL, conversion of Bangla sentence into UNL expressions, algorithms for conversion of Bangla sentence to UNL have been found or in progress for the last few years [11], [12]. III. ANALYSIS OF BANGLA LOCATIVE CATIVE CASE SENTENCES IN CONTEXT OF UNL In Bangla the locative case is formed differently depending on the ending of the word [12], [13]. For examples, ঢাকা (dhaka) + য় (ye) = ঢাকায় (dhakaye) meaning in English 'in Dhaka', লন্ডন (london) + এ (e) =লন্ডনন (londone) meaning in English 'in London' etc. We have analyzed five categories of locative case sentences and developed analysis rules for those classes of sentences to convert them into UNL expressions. A. Case in Place To identify the conditions for case in place we define i) attribute #PLC with verb roots such as ‘বস’ (bosh), ‘থাক’ (thak) etc. that can form verbs, ii) attribute #PLC with the name of the places e.g. name of the river, name of the country etc. and finally, iii) case inflexions ’এ’ (e) or ’য়’ (ye) or ‘তে’(te) are included with the nouns/noun phases and an attribute 7TH must be added with nouns or noun phases. Say, consider the following three sentences, ‘তস ঢাকায় থানক’, pronounce as, Shey dhakaye thake, meaning, 'He lives in Dhaka', ‘আমি মসঙ্গাপুনে থামক’, pronounce as</s>
<s>Aami singapore thaki, in English, 'I live in Singapore' and ‘নদীনে িাছ আনছ’ pronounce as Nodite machh ache meaning 'Fishes are in the river'. In the first sentence, noun ‘ঢাকা’ (dhaka) is a vowel ended word, where ‘ঢাকা’ is combined with case inflexion ‘য়’ (ye) to make ‘ঢাকায়’ (dhakaye) meaning in Dhaka that reflects case in place and produces UNL relation plc with verb ‘থানক’ (thake). And in the second sentence, noun ‘মসঙ্গাপুে’ (Singapore) is a consonant ended word where ‘মসঙ্গাপুে’ is combined with case inflexion ‘এ’ (e) to make ‘মসঙ্গাপুনে’ meaning in Singapore that also reflects case in place and produces UNL relation plc with verb ‘থামক’ (thaki). Whereas in the third sentence, noun ‘নদী’ (nodi) meaning river is a vowel ended word, where ‘নদী’ is combined with ‘তে’ (te) to make ‘নদীনে’ (nodite) meaning in the river that also reflects case in place and produces UNL relation plc with verb ‘আনছ’ (achhe). So, if the place is vowel ended case inflexions ‘য়’(ye) & ‘তে’ (te) and if the place is consonant ended, the case inflexion ‘এ’(e) are used to make case in place for plc relation. Attributes VEND and CEND are used with all kinds of vowel ended and consonant ended places in all locative cases respectively. Analysis rules for converting the sentences for case in place are as follows: Rule for morphological analysis: +{N,VEND/CEND,#PLC,7TH,^anus,^krok:@::}{INF,KROK,7TH, VEND/CEND, #PLC:::} Rule for semantic analysis: >{N/NP,#PLC,inf,krok,7th::plc:}{V,#PLC:::} Morphological rule is to be used to complete the morphological analysis between noun and case inflexion and semantic rule is to be used to perform semantic analysis between noun/noun phase and verb by making plc relation to convert the sentence into UNL expression where, 'INF' denotes attribute for inflexion, 'KROK' for case inflexion, N' for noun, 'NP' for noun phase, and temporary attributes 'inf' and 'krok' are to be used to prevent recursive operations. B. Case to Place In case to place we use movement related verb roots such as ‘যা’(ja), ‘আস’(ash),‘ঘুর্ ’(ghur), ‘মির্ ’(fir), ‘তদৌড়’ (dour) etc. To identify the conditions for case to place we define i) attribute #PLT with verb roots that can form verbs, ii) attribute #PLT with the name of the places and finally iii) case inflexion 0 (zero) is added with nouns/noun phases. For example, consider the following two sentences, ‘আমি লন্ডন যাব’ pronounce as Aami london jabo, meaning “I will go to London” and ‘োহাো বামড় যানব’ pronounce as Tahara bari jabe, meaning ‘They go to home’. In the first sentence, noun ‘লন্ডন’ london is a consonant ended word where ‘লন্ডন’ is combined with case inflexion 0 (zero) to make ‘লন্ডন’ that reflects case to place and produces UNL relation plt with verb jabo. And in the second sentence, noun ‘বামড়’ (home) is a vowel ended word where ‘বামড়’ is combined with case inflexion '0' (zero) to make ‘বামড়’ that also reflects case to place and produces UNL relation plt with verb jabe. As iflexion '0' is added in both instances to make noun phases no morphological analysis is</s>
<s>needed. Attributes VEND and CEND are used with vowel ended and consonant ended nouns respectively. Analyses rules for converting the sentences for case in place are as follows: Rule for morphological analysis: No analysis rule is required Rule for semantic analysis: >{N/NP,#PLT::plt:}{V,#PLT:::} This semantic rule performs semantic analysis between noun/noun phase and verb by making plt relation to convert the place to related locative case sentences into UNL expressions. C. Case in Time In order to identify the conditions for case in time we define i) attribute #TIM with verb roots that can form verbs, ii) International Journal of Information and Education Technology, Vol. 4, No. 5, October 2014455attribute #TIM with the nouns related to times e.g. morning, evening, day, night, 6 o clock etc. and finally iii) case inflexions ‘এ’ (e) or ‘য়’ (ye) or ‘তে’ (te) must be included with the nouns/noun phases and an attribute 7TH must be added with nouns or noun phases. Consider a sentence, say, ‘আমি প্রেযহ সকানল রুটি খাই’ pronounce as aami prottoho shokale ruti khai meaning ‘I eat bread every morning’. In this sentence, noun সকাল (shokal) meaning morning is a consonant ended time where ‘সকাল’ is combined with case inflexion ‘এ’ (e) to make ‘সকানল’ (shokale) meaning in the morning that reflects case in time and produces UNL relation tim with verb ‘খাই’ (eat). Analysis rules for converting the sentences for case in time are as follows: Rule for morphological analysis: +{N,VEND/CEND,#TIM,7TH,^anus,^krok:@::}{INF,KROK,7TH, VEND/CEND, #TIM:::} Rule for semantic analysis: >{N/NP,#TIM,VEVD/CEND,inf,krok,7th::tim:}{V,#TIM:::} D. Case to Time In order to identify the conditions for case to time we define i) attribute #TMT with verb roots that can form verbs, ii) attribute #TMT with the nouns related to times e.g. morning, evening, day, night, 6 o clock etc. and finally iii) case inflexions ‘পযযন্ত’ (porjonto) or ‘অবমি’ (obodhi) is to be included with the nouns/noun phases. Consider the following two sentences, say, ‘আমি প্রেযহ মবকাল পযযন্ত অমিনস থামক’ pronounce as Aami prottoho bikal porjonto ofishe thaki meaning ‘I stay in my office till every afternoon’ and ‘তস আজ োে ৯টা অবমি বাসায় থাকনব’ pronounce as Se aaj rat noi ta obodhi bashae thakbe, in English He will stay at home till 9 pm. In the first sentence, case inflexion ‘পযযন্ত’ (porjonto) meaning till is placed after ‘মবকাল’(bikal) meaning afternoon to make noun phase ‘মবকাল পযযন্ত’ (bikal porjonto) meaning till afternoon and in the second sentence case inflexion ‘অবমি’ (obodhi) meaning till is placed after ৯টা (noi-ta) meaning 9 pm to make noun phase ‘৯টা অবমি’ (noi-ta obodhi) meaning till 9 pm. Both of them reflect case to time and produce UNL relation tmf with verb ‘থামক’ (thaki) and ‘থাকনব’ (thakbe) respectively. Analyses rules for converting the sentences for case to time are as follows: Rule for morphological analysis: +{N,VEND/CEND,#TMT,^anus,^krok:@::}{INF,KROK,7TH, VEND/CEND, #TMT:::} Rule for semantic analysis: >{N/NP,#TMT,VEVD/CEND,inf,krok,7th::tim:} {V,#TMT:::} IV. CONVERSION OF A BANGLA LOCATIVE CASE SENTENCE TO UNL EXPRESSIONS This section describes the conversion procedures and the experimental results of the following locative case sentence into UNL</s>
<s>expressions. Bangla sentence: “আমি জানুয়ােী হইনে জনু পযযন্ত ঢাকায় থাকনবা” English pronunciation: “Aami january hoite june porjonto dhakaye thakbo” Equivalent English Sentence: I will stay in Dhaka from January to June. The chunks obtained from the input sentence are given below: (<<)(আমি)( ) (জানয়ুােী) ( ) (হইনে) ( ) (জনু) ( ) (পযযন্ত)( ) (ঢাকা)(য়) ( )(থাক)(তবা)(>>) There are nine nodes in the given sentence shown in Table I. We have used an EnConverter [14] tool for our experiment. The tool takes as its input a dictionary file (Table I), a set of analysis rules (Table II). These analysis rules are to be applied to the nodes in the node list thorough the windows of the Enconverter. Enconverter inputs the string of sentence and initially the sentence will be placed in the right analysis windows (RAW). Then in scans the string of the sentence from left to right and all matched morphemes with the same string characters are retrieved from the word dictionary and become the candidate morphemes. The rules are applied to these candidate morphemes according to a rule priority to build the syntactic tree and the semantic network of UNL for the sentence [2], [14], [15]. This semantic network for UNL can later be converted into a variety of native languages using Deconverter [16] by language specific generation rules. The nodes of the dictionary are processed by the EnConverter using the dictionary entries and analysis rules. TABLE I: HEAD WORDS UNIVERSAL WORDS AND THE GRAMMATICAL ATTRIBUTES OF THE NODES IN THE INPUT SENTENCE Nodes Head Words Universal Words Attributes Node 1 আমি I(icl>person) PRON, HPRON, SUBJ, 1P, SG Node 2 জানয়ুােী January(icl>month) N, VEND, #TIM Node 3 হইনে Null ABY, ANUS, #FRM Node 4 জনু June(icl>month) N, CEND, #TIM Node 5 পযযন্ত Null ABY, ANUS, #TO Node 6 ঢাকা Dhaka(iof>city) N, VEND, NPRO, #PLC, CAPT Node 7 য় Null BIV, KROK, 7TH Node 8 থাক Stay(icl>live) ROOT, CEND, CEG2, #PLC Node 9 তবা null INF, VI, 1P (where, N indicates noun, PRON indicates pronoun, HPRN denotes human pronoun, SUBJ for subjective pronoun, SG for singular number, #TIM indicates time related node, VEND for vowel ended, CEND for consonant ended, ABY represents preposition, ANUS also represents preposition, NPRO for proper noun, #PLC for place related node, INF for inflexion, VI for verbal inflexion, 7TH denotes case inflexion International Journal of Information and Education Technology, Vol. 4, No. 5, October 2014456for seventh number and Null represents no universal word). These nodes are processed by the EnConverter using the dictionary entries and analysis rules. TABLE II: ANALYSIS RULES TO CONVERT THE GIVEN SENTENCE INTO UNL EXPRESSIONS Rule 1. R{SHEAD:::}{PRON,HPRON,SUBJ:::}(BLK) Rule 2. R{PRON,HPRON,SUBJ:::}{BLK:::} Rule 3. R(PRON,HPRON,SUBJ:::}{BLK:::}{N,#TIM:::} Rule 4. DR{N,#TIM,^blk:blk::}{BLK:::} Rule 5. +{N,#TIM,blk,^ABY,^ANUS,^#FRM:@::}{ABY,ANUS,#FRM:::} Rule 6. R{:::}{N,#TIM,blk,ABY,ANUS,#FRM:-N,-ABY,-ANUS,+NP::} Rule 7. R{NP,#TIM,#FRM,blk:::}{BLK:::} Rule 8. R{BLK:::}{N,#TIM:::} Rule 9. +{N,#TIM,blk,^ABY,^ANUS,^#TO:@::}{ABY,ANUS,#TO:::} Rule 10. R{:::}{ N,#TIM,blk,ABY,ANUS,#TO:-N,-ABY,-ANUS,+NP::} Rule 11. R{NP,#TIM,#TO,blk:::}{BLK:::} Rule 12. R{BLK:::}{N,#PLC:::} Rule 13. +{N,#PLC,^biv,^krok,^7th:@::}{BIV,KROK,7TH:::} Rule 14. R{:::}{N,#PLC,BIV,KROK,7TH:-N,-BIV,-KROK,-7TH,,+biv+NP::} Rule 15. R{NP,#PLC,biv:::}{BLK:::} Rule 16. R{BLK:::}{ROOT:::} Rule 17. +{ROOT,CEND:@::}{BIV,KBIV,1P:::} Rule18. DL{BLK:::}{ROOT,CEND,BIV,KBIV,1P:-ROOT,-CEND,-BIV,-KBIV, +V, +biv, +kbiv::} Rule 19. 
>{NP,#PLC::plc:}{V:::} Rule 20. DL{BLK::}{V:::} Rule 21.</s>
<s>>{NP,#TIM,#TO::tmt:}{V:::} Rule 22. >{NP,#TIM,#FRM::tmf:}{V:::} Rule 23. >{PRON,HUMN,SUBJ::aoj:}{V:::} Rule 24. L{SHEAD:::}{V:::} Before applying rules sentence head (<<) places in the left analysis window (LAW) and pronoun ‘আমি’(aami) meaning I in the Right Analysis Window (RAW). Now right shift rule 1, 2 and 3 are applied to shift the windows of Enconverter to three steps right. Then right node deletion rule (rule 4) is applied to delete the node between noun ‘জানুয়ােী’ (January) and case maker ‘হইনে’ (hoite) meaning from. Rule 5 is used to perform morphological analysis between ‘জানুয়ােী’ and ‘হইনে’ to make noun phase ‘জানুয়ােীহইনে’ (january hoite) meaning from January. Three right shift rules 6, 7 and 8 are to be applied to shift the windows three steps right. Morphological analysis between noun ‘জনু’ (June) and case maker ‘পযযন্ত’ (porjonto) is to be performed using analysis rule 9. Again morphological analysis between noun ‘ঢাকা’ and case maker ‘য়’ (ye) will be performed using rule 13 after applying right shift rules 10, 11 and 12. After that three right shift rules 14, 15 and 16 are to be applied followed by an analysis rule (rule 17) to perform morphological analysis between verb root ‘থাক’ (thak) and verbal inflexion ‘তবা’ (bo). After completion all morphological analyses a left node deletion rule (rule 18) is to be applied to delete the node between noun ‘ঢাকা’ and verb ‘থাকনবা’. Considering the UNL relation plc a semantic analysis is be performed between place ‘ঢাকা’ and verb ‘থাকনবা’ using rule 19 and consequently noun ‘ঢাকা’ will be deleted from the node-list. Again another left node deletion rule (rule 20) is to be applied to delete the node between noun phase ‘জনুপযযন্ত’ and verb ‘থাকনবা’. To perform the semantic analysis between noun phase ‘জনুপযযন্ত’ and verb ‘থাকনবা’ rule 21 is to be used for making UNL relation tmt. In this step, verb ‘থাকনবা’ remains in the RAW and noun phase is deleted from the node-list. Another semantic operation will be resolved by tmf relation between noun phase ‘জানুয়ােীহইনে’ and verb ‘থাকনবা’ and ‘জানুয়ােীহইনে’ is deleted after applying rule 22 followed by left node deletion rule 20. Finally, aoj relation can be resolved between pronoun ‘আমি’ and verb ‘থাকনবা’ and pronoun is deleted followed by a left shift rule 24 to place sentence head, which has the attribute SHEAD in the RAW to complete the conversion procedures. Table III shows the UNL expressions of the sentence. TABLE III: UNL EXPRESSIONS OF THE GIVEN BANGLA SENTENCE {org:en} I will stay in London from January to June {/ogr} {unl} aoj(stay(icl>dwell>be,aoj>person,plc>uw).@entry.@future,i(icl>person plc(stay(icl>dwell>be,aoj>person,plc>uw).@entry.@future,london(iof> national_capital>thing)) tmf(london(iof>national_capital>thing),january(icl>gregorian_calendar _month>thing)) tmt(london(iof>national_capital>thing),june(icl>gregorian_calendar_m onth>thing)) {/unl} V. CONCLUSIONS AND FUTURE WORKS This paper analyzed various types of Bangla locative case sentences in favor of UNL structure considering the lexicon and UNL relations they create. It also proposed some analysis rules of all kinds of sentences to convert them into UNL expressions. By using the analysis rules we successfully converted locative case sentences into correct UNL expressions. Our future plan is to develop a mechanism which will allow users to</s>
<s>translate any kinds of locative case sentences into UNL expressions. These UNL expressions can later be converted to any other native languages using language specific generation rules. Currently, we are experimenting on other case sentences. Our analysis rules have been developed by using standard format provided by the UNL Center of the UNDL Foundation so that analysis rules of other languages can be benefited from our formats. Completion of other rules for all types of Bangla sentences will be a major step towards developing a generic Bangla language translation. REFERENCES [1] H. Uchida, M. Zhu, and T. C. D. Senta, “Universal Networking Language,” UNDL Foundation, International environment house, 2005/6, Geneva, Switzerland. [2] EnConverter Specification, Version 3.0, UNL Center, UNDL Foundation, Tokyo 150-8304, Japan 2002. [3] D. Vijay, “Generation of Hindi from UNL,” IIT Bombay, M Tech Thesis. [4] L. Giri, “Semantic net like knowledge structure generation from natural languages,” IIT Bombay, B Tech Dissertation 2000. [5] K. Deve et al., “Knowledge extraction from Hindi text,” Journal of Institute of Electronic and Telecommunication Engineering, vol. 18, issue 4, 2001. [6] T. Dhanabalam, K. Saravanan, and T. V. Geetha, “Tamil to UNL EnConverter,” in Proc. International Conference on Universal Knowledge and Langurage, Goa, India, 2002. International Journal of Information and Education Technology, Vol. 4, No. 5, October 2014457[7] N. Adly and S. Alansary, “Evaluation of Arabic machine translation system based on the Universal Networking Language,” in Proc. International Conference on Natural Language Processing and Information System, LNCS, 2009, pp. 243-257. [8] S. Dashgupta, N. Khan, D. S. H. Pavel, A. I. Sarkar, and M. Khan, “Morphological analysis of inflecting compound words in Bangla,” in Proc. International Conference on Computer, and Communication Engineering (ICCIT), Dhaka, 2005, pp. 110-117. [9] S. Gilles and B. Christian, “UNL-Frence deconversion as transfer & generation from an interlingua with possible quality enhancement through offline human interaction,” Machine Translation Summit-VII, Singapore. [10] H. M. N. Y. Ali, J. K. Das, S. M. A. A. Mamun, and M. E. H. Choudhury, “Specific features of a converter of web documents from Bengali to Universal Networking Language,” in Proc. International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 2008, pp. 726-731. [11] N. Y. Ali, M. A. Ali, A. M. Nurannabi, and J. K. Das, “Algorithm for conversion of Bangla sentence to Universal Networking Language,” in Proc. International Conference on Asian Language Processing, Harbin, China, 2010, pp. 118-121. [12] D. M. Shahidullah, “Bangala Vyakaran,” Maola Brothers Prokashoni, Dhaka, pp. 10-130, August, 2003. [13] D. C. S. Kumar, “Vasha-Prokash Bangla Vyakaran,” Rupa and Company Prokashoni, Calcutta, pp. 170-175, July 1999. Nawab Yousuf Ali was born in Rangpur, Bangladesh on April 05, 1966. He passed Secondary School Certificate from Rangpur High School, Rangpur, Rajshahi, Bangladesh in 1982 and Higher Secondary School Certificate from Carmichael College, Rangpur, Rajshahi, Bangladesh in 1984. He did his Master in Computer Science and Engineering from Lvov Polytechnic Institute, Lvov, Ukraine, USSR in1992 and obtained PhD in CSE from the Department of CSE, Jahangirnagar University, Dhaka, Bangladesh in 2012.</s>
<s>He is currently serving as an associate professor in the Department of CSE, East West University, Dhaka, Bangladesh. He research interest includes NLP, Universal Networking Language, Bangla text conversion to UNL. He has published one book, one book chapter, nine journal and 13 conference papers in national and international journals and conferences. Golam Sorwar obtained hisPhD in Information Technology from Monash University, Australia. He received his B. Sc. in computer science and Engineering from the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology. He is currentlyserving as a lecturer in the School of Commerce and Management at Southern CrossUniversity, Australia. He has published a lot of research papers in International Journals and Conferences. International Journal of Information and Education Technology, Vol. 4, No. 5, October 2014458[14] Undl. (Oct. 25, 2013). [Online]. Available: http://www.undl.org/[15] Unl. (Oct. 25, 2013). [Online]. Available: http://www.unl.ru/[16] DeConverter Specification, Version 2.7, UNL Center, UNDLFoundation, Tokyo 150-8304, Japan 2002.Mohammad Ameer Ali has completed his PhD in Information Technology from Monash University, Australia in 2006 while receiving his B. Sc. in Computer Science and Engineering from the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology in 2001. He is a full time faculty member of the Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh holding the position of associate professor. From January 2007 to August 2007, he was the assistant professor of Department of Computer Science, Daffodil International University, Bangladesh. His research interest includes image processing, fuzzy set theory, segmentation, vendor selection, telemedicine, networking, shape coding, video segmentation, mobile networking, etc.Mr. Ali is the program committee member of IEEE DMAI 2009, Australia.</s>
<s>Auto-correction of English to Bengali Transliteration System using Levenshtein DistanceSee discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/335932699Auto-correction of English to Bengali Transliteration System using LevenshteinDistanceConference Paper · June 2019DOI: 10.1109/ICSCC.2019.8843613CITATIONSREADS2155 authors, including:Some of the authors of this publication are also working on these related projects:Emotion Detection View projectDesign and Development of Precision Agriculture Information System for Bangladesh View projectFarhan LabibEast West University (Bangladesh)2 PUBLICATIONS 9 CITATIONS SEE PROFILEAmit Kumar DasEast West University (Bangladesh)39 PUBLICATIONS 290 CITATIONS SEE PROFILEAll content following this page was uploaded by Amit Kumar Das on 22 September 2019.The user has requested enhancement of the downloaded file.https://www.researchgate.net/publication/335932699_Auto-correction_of_English_to_Bengali_Transliteration_System_using_Levenshtein_Distance?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_2&_esc=publicationCoverPdfhttps://www.researchgate.net/publication/335932699_Auto-correction_of_English_to_Bengali_Transliteration_System_using_Levenshtein_Distance?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_3&_esc=publicationCoverPdfhttps://www.researchgate.net/project/Emotion-Detection-3?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_9&_esc=publicationCoverPdfhttps://www.researchgate.net/project/Design-and-Development-of-Precision-Agriculture-Information-System-for-Bangladesh?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_9&_esc=publicationCoverPdfhttps://www.researchgate.net/?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_1&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Farhan_Labib2?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Farhan_Labib2?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/East_West_University_Bangladesh?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Farhan_Labib2?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Amit_Das20?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Amit_Das20?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMj
Y5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/East_West_University_Bangladesh?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Amit_Das20?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Amit_Das20?enrichId=rgreq-243315103e1716e49681df769360cad6-XXX&enrichSource=Y292ZXJQYWdlOzMzNTkzMjY5OTtBUzo4MDU3OTkwNzcxMDE1NjhAMTU2OTEyODg3OTU4Mg%3D%3D&el=1_x_10&_esc=publicationCoverPdf978-1-7281-1557-3/19/$31.00 ©2019 IEEE 2019 7th International Conference on Smart Computing & Communications (ICSCC) Auto-correction of English to Bengali Transliteration System using Levenshtein Distance Md. Mosabbir Hossain, Md. Farhan Labib, Ahmed Sady Rifat, Amit Kumar Das, Monira Mukta Department of Computer Science and Engineering East West University Dhaka, Bangladesh e-mail: mosabbirtarek7@gmail.com, farhan.labib4@gmail.com, sadyrifat@gmail.com, amit.csedu@gmail.com, monira.mukta7@gmail.com Abstract— The automated transliteration process is a function or software application that checks words against a computerized corpus to ensure that they are correct. A transliteration system is required either during the typing of text or when a user does not know the correct spelling of a word. The main objective of this research is to develop a system that is much better to check the spelling of the transliterated word by calculating Levenshtein distance. To make the mechanism more efficient and accurate unigram method has been implemented. Several techniques have been integrated for data collection to make the system more reliable and flexible. Almost twenty thousand words are included to create a data lexicon for this research work. This system is able to deal with signed or unsigned numeric values and float numbers with 78.13% accuracy. Keywords- Data Corpus, Data Processing, Levenshtein Distance, N-gram, NLP, Unigram, Transliterate Word. I. INTRODUCTION Since the dawn of civilization, people are using the writing method to express their thoughts and views. To expose one's feelings, it is the best way till now. Hence, to express verbally there is no need of checking spell. But in case of the composed method, it is a crucial issue to check the spell of each word of a sentence. Correct spelling helps the reader to understand the meaning instantly. Moreover, manual checking of a spell is difficult and time-consuming. So, the necessity of automatic spell checking is beyond description. For that purpose, natural language processing (NLP) is required to recognize the word or speech [1]. Words that the spell checker recognizes as incorrectly spelled are typically featured or underlined. With the assistance of NLP, an automatic spell checker will detect a word as errored or misspelled if that particular word is not matched with the corpus. The system not only detects but also corrects the word. Hence, it also suggests the appropriate word for a sentence. The primary spell checkers were verifiers rather than correctors. Even they did not suggest mistakenly spelled word. There may be some spelling checker for English</s>
<s>language but the Bengali language; it is sporadic. Moreover, approximately 189 million people speak in the Bengali language around the world. These people indeed use the transliterated word (Bangla mixed with English) to share their thoughts in writing purpose. In our research, we have developed a system where the system will distinguish the misspelled Bengali word and will recommend a suitable right word for the sentence calculating Levenshtein distance and using unigram strategy. For instance: if someone writes "amra besay zabo" then the result should show "আমরা বাসায় যাব" instead of "আ�া েবসায় যাব". Here, the word "besay" is misspelled. So, the goal of our research is to diminish the incorrect word and replace it with the appropriate word written in the correct spell. The remaining paper ordered as follows- Section II gives a compact description of some previous works related to this research area. Section III gives a simple overview of our working procedure. In Section IV, we briefly describe the methodology of our research work and provides information regarding dataset processing. In Section V, we present our result section by comparing the two methods. We discuss our future directions and conclude the paper with some useful suggestions in Section VI. II. RELATED WORK There is some framework that has introduced in this research field to check and correct the misspelling word. There are several techniques to develop this framework like: according to the string, statistical approach, rule-based framework, similarity keyword, probabilistic and so on. A statistical method to handle the errored word was adopted by Mays [2]. In that research, various data were collected, and those were transformed into misspelled sentences. Then calculate the probability of sentences by using maximum likelihood estimation of probability. Another study was done by Bidyut [3] where a confusion set was generated, and probability was calculated by using bigram and trigram technique. The weighted score detected the error. These works were done for the English language which was not applicable for the Bengali language. Hence, a few works have been accomplished on the Bengali language. A research was done on double metaphonic encoding for Bangla language spelling checker [4] where the system was complex and based on the consonant cluster. Moreover, it was not automated. Another work has been done for detecting and correcting words based on social data set by filtering messages [5]. Murthy, Akshatha, Upadhyaya & Kumar have done their research work on 2019 7th International Conference on Smart Computing & Communications (ICSCC)978-1-7281-1557-3/19/$31.00 ©2019 IEEEKannada spell checker with sandhi splitter [6]. Another work has been accomplished by Donghuilee and peng over Chinese language implementing Soundex algorithms [7]. After considering the above cases we resolved to build up the Bengali spell checker with more effectiveness and all more precisely in a simple way using unigram strategy. To develop Bengali spell checker, we deployed the Levenshtein algorithm to predict the correctly spelled word. III. PROPOSED MODEL Our framework is able to take an input of any transliterate word string which means Bangla interspersed</s>
<s>with English. For instance: "Ami bhat khai". Here, the sentence is written in transliterate word. This transliterates word sentence is be converted into Bangla sentence with the assistance of this system. For anticipating anything, the machine needs lots of data. So, a Bangla data corpus is added to the system where all the Bangla words are written incorrectly spelled. At the point when input is taken, it will be compared to the data corpus. For making a comparison, Levenshtein algorithm has been utilized. This algorithm calculates the distance between two sequences. Our proposed system will compare the input string with all of the similar data which are included in the data corpus. The minimum distance will be considered as the predicted result. This framework won't just anticipate the alphabetic spelling yet, also, think about the numeric esteem. For instance: If an input is taken as "dhatuti 500 degree Celsius-e uttopto ache" which implies the metal is heated in 500 degree Celsius will be resulted in the form of "ধাতুিট ৫০০ িড�ী েসলিসয়ােস উৎত� আেছ". Here, the numeric value will not be scattered or dissipated. It will also be incorporated into the system. Again, in case of floating point for numeric value, there will be no correction for those values [8]. For instance: "চােলর দাম �িত েকিজেত ১০. ৫০ টাকা বািড়েয়েছ।" Here, the program does not consider "." as a punctuation mark. Rather it considers as floating point and returns that as it is. When the system converts transliterate word, it converts "." to "।" as the meaning of "." is the ending point of the sentence. In Bangla, the ending point is indicated by "।". Even it can deal with "+" and "-" symbol such as: "কানাডায় আজেকর তাপমা�া -৪ িডি� েসলিসয়াস।" Here, "-৪" is a numeric value with special character "-". This will not be edited by the system. However, if a user gives input of some sentences with punctuation mark like "।", "?", "!", ",", ":", ";" then the system omits these marks for better calculation. Finally, when the correction process is done the system replaces the punctuation mark as the user has given in the input sentence. In brief, the system takes transliterate word as input and converts it into Bangla. Hence, split the sentence into words. Those words are compared to the data corpus computing distance. If the distance is above 65 percent or equal, then the word is added for suggesting. Else the system skips the word and will keep the same word. The following flow chart demonstrates the working procedure of our system: Figure 1. Working procedure of spell checker In Figure.1 the proposed system starts with converting the transliterated sentence into Bangla sentence using a text parser which alters Bangla written in the Roman inscription to its phonetic comparable in Bangla. After that, the Bangla sentence is split into individual words. Each word is checked with the data corpus. At the point when a word is found which is a numeric value, it</s>
When a word is found to be a numeric value, it is added directly to the output string. Take a sentence such as "চােলর দাম �িত েকিজেত ১০ টাকা বািড়েয়েছ।": here "১০" is a numeric value, and no correction is applied to it. A non-numeric word, however, is sent to the data corpus, and the program checks whether the word is present there. If it is, the program adds that word to the suggestion list. If it is not, the program compares the word against the corpus entries: whenever the similarity score is greater than or equal to 65 percent, the corpus word is appended to the suggestion list; otherwise the program moves on to the next corpus word. After finishing the calculation over the whole data corpus, the program checks whether the suggestion list is empty. If it is empty, the word is kept exactly as the user typed it. If it is not empty, the program finds the maximum score among the suggested words; if several words share the same maximum score, the tie is resolved with the unigram method, otherwise the word with the maximum score is selected. The chosen word is joined to the output string and the program proceeds to the next input word. When the calculation has finished for all input words, the joined string is returned as the predicted correct sentence [9].

IV. METHODOLOGY

A. Levenshtein Distance

The system is built on the Levenshtein algorithm, which calculates the distance between two strings. The distance is defined by the following recurrence:

lev_{a,b}(m, n) = max(m, n), if min(m, n) = 0; otherwise
lev_{a,b}(m, n) = min{ lev_{a,b}(m-1, n) + 1, lev_{a,b}(m, n-1) + 1, lev_{a,b}(m-1, n-1) + 1_(a_m ≠ b_n) }   (1)

Here the indicator 1_(a_m ≠ b_n) equals 0 when a_m = b_n and 1 otherwise, and lev_{a,b}(m, n) is the distance between the first m characters of a and the first n characters of b. The three terms inside the minimum correspond to a deletion, an insertion, and a substitution, respectively. The Levenshtein algorithm takes the minimum distance when suggesting a correctly spelled word: the higher the distance, the more different the strings are. The technique also yields a similarity ratio between two words:

similarity(a, b) = (|a| + |b| - lev_{a,b}(m, n)) / (|a| + |b|)   (2)

where |a| and |b| are the lengths of the sequences a and b. If the two strings are 100% similar, no change is required; otherwise the mismatch is resolved through substitution, insertion, or deletion, and a word that does not match anything in the data corpus is handled by removing the misspelled form and inserting the corrected one. For example:

• Substitution: (িবকােশর, িবকােসর), edit distance = 1
• Insertion: (েকন, েকেনা), edit distance = 1
• Deletion: (পনয্, পুেনয্র), edit distance = 3

These cases illustrate the three edit operations: in the first, "িবকােসর" is replaced by "িবকােশর" with an edit distance of 1; in the second, "েকন" is inserted in place of "েকেনা"; and in the third, "পনয্" is generated from "পুেনয্র" by deletion. This technique is used to measure the edit distance between any two words.
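As a concrete illustration of Eqs. (1) and (2), the following Python sketch computes the edit distance and the similarity ratio. The function names and the romanized test pair are ours for illustration and are not taken from the authors' implementation.

import itertools  # not required; standard dynamic-programming solution below

def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b, following the recurrence in Eq. (1)."""
    m, n = len(a), len(b)
    # dp[i][j] = distance between the first i characters of a and first j of b
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                                  # i deletions
    for j in range(n + 1):
        dp[0][j] = j                                  # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # indicator 1_(a_m != b_n)
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

def similarity_ratio(a: str, b: str) -> float:
    """Eq. (2): (|a| + |b| - lev(a, b)) / (|a| + |b|)."""
    total = len(a) + len(b)
    return (total - levenshtein(a, b)) / total if total else 1.0

# Romanized version of the paper's "amra besay zabo" example:
print(levenshtein("besay", "basay"))          # -> 1 (single substitution)
print(similarity_ratio("besay", "basay"))     # -> 0.9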
B. Data Processing

To establish this framework, a large amount of data is needed to build a data corpus for predicting or recommending words. These data are collected automatically from various online Bangla newspapers using ParseHub; online newspapers were chosen as the source because correctly spelled Bangla words are otherwise difficult to collect. ParseHub is a tool that can extract large amounts of data from various websites within a short time, but the extracted data are not pure: English letters and special characters (such as : ; \ / @ # $ % & ? | ! + - _ and so on) can be mixed in. Data filtering is therefore needed to extract pure data from the raw text, and during filtering any brackets, English characters and special characters are removed [10]. For instance:

"েনাভা ি� আই (NOVA 3i) �াটর্ েফােনর দাম কিমেয়েছ । ে�েনর বােসর্েলানায় অনুি�ত এ বছেরর েমলায় বড় চমক িছল েফাি�ং (Folding) বা ভাঁজ করা �াটর্ েফােনর পুনরায় আিবভর্ াব। এ ে�ে� বড় দুই মুেঠােফান িনমর্াতা সয্ামসাং (Samsung) ও হুয়াওেয়. (Huawei) িছল আেলাচনার েক�িব�ুেত। তােদর েদখােনা গয্ালাি� েফা� (Galaxy Fold) এবং েমট এ� �িতেযািগতায় �িত�ান দুিটেক এক ধাপ এিগেয় িনল।"

The above text is converted into:

"েনাভা ি� আই �াটর্ েফােনর দাম কিমেয়েছ । ে�েনর বােসর্েলানায় অনুি�ত এ বছেরর েমলায় বড় চমক িছল েফাি�ং বা ভাঁজ করা �াটর্ েফােনর পুনরায় আিবভর্ াব। এ ে�ে� বড় দুই মুেঠােফান িনমর্াতা সয্ামসাং ও হুয়াওেয় িছল আেলাচনার েক�িব�ুেত। তােদর েদখােনা গয্ালাি� েফা� এবং েমট এ� �িতেযািগতায় �িত�ান দুিটেক এক ধাপ এিগেয় িনল।"

After obtaining pure Bangla data in paragraph form, the next step is to split it into sentences following the sentence-ending punctuation mark. For instance, the above paragraph is split into:

• েনাভা ি� আই �াটর্ েফােনর দাম কিমেয়েছ ।
• ে�েনর বােসর্েলানায় অনুি�ত এ বছেরর েমলায় বড় চমক িছল েফাি�ং বা ভাঁজ করা �াটর্ েফােনর পুনরায় আিবভর্ াব।

These sentences are then split into words to build the data corpus, into which only unique words are entered. Consider the sentences:

• েনাভা ি� আই �াটর্ েফােনর দাম কিমেয়েছ ।
• চােলর দাম �িত েকিজেত ১০ টাকা বািড়েয়েছ।
• আইেফান ৫ এর দাম কিমেয়েছ।

Here the word "দাম" appears in all three sentences and the word "কিমেয়েছ" appears in the first and the last one, but each of them is appended to the data corpus only once, so that every entry remains distinct. Thus, the data corpus is created by selecting unique words. The corpus is built from individual words rather than whole sentences, since Bangla grammar makes sentence-level handling complicated.
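The cleaning and splitting steps described above can be sketched as follows. The regular expression, the function names, and the decision to split only on the danda are our assumptions; the paper does not specify the exact filter applied to the ParseHub output.

import re

# Keep Bangla characters, the danda (।) and whitespace; drop English letters,
# brackets and other special characters (an illustrative filter, not the
# authors' exact one).
BANGLA_ONLY = re.compile(r"[^\u0980-\u09FF।\s]")

def clean(raw: str) -> str:
    text = BANGLA_ONLY.sub("", raw)
    return re.sub(r"\s+", " ", text).strip()

def split_sentences(text: str) -> list:
    # Bangla prose ends sentences with the danda, so split on it.
    return [s.strip() for s in text.split("।") if s.strip()]

def build_corpus(sentences) -> set:
    # Unique-word corpus: every distinct word is stored exactly once.
    vocab = set()
    for sentence in sentences:
        vocab.update(sentence.split())
    return vocab

# corpus = build_corpus(split_sentences(clean(scraped_paragraph)))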
C. Unigram Approach

The unigram approach deals with a single item from a sequence and originates from the N-gram model, in which prediction is made from two or more (N = 2, 3, 4, …) consecutive words. With N = 2 the model is a bigram and with N = 3 a trigram. The bigram approach considers two consecutive words for its prediction. For instance, for "গতকাল সারািদন অেনক গরম পেড়িছল":

Bigram: {("গতকাল সারািদন"), ("সারািদন অেনক"), ("অেনক গরম"), ("গরম পেড়িছল")}

The trigram approach considers three consecutive words for its prediction:

Trigram: {("গতকাল সারািদন অেনক"), ("সারািদন অেনক গরম"), ("অেনক গরম পেড়িছল")}

Words are the primary substance of sentences, and groups of words express the meaning of a sentence better, so from that perspective the bigram or trigram approach might look preferable to the unigram approach. However, the objective of this system is to manage individual words rather than sentences, so the bigram and trigram approaches are not suitable here. The system makes suggestions from the data corpus for individual words, which is exactly what the unigram approach covers; it was therefore chosen to increase the performance of the framework. In the unigram approach the system considers a single word (N = 1) and prefers the candidate used most frequently in the data corpus. For instance, "আিম ভাত কাই" is the set of words {আিম, ভাত, কাই}, and the word "কাই" is incorrect in that sentence. An appropriate word must be chosen from the data corpus to make the sentence accurate, and the unigram approach resolves the error: it first finds all corpus words related to "কাই", for instance {খাই, নাই, যাই, পাই, গাই, ভাই}. The similarity score of "কাই" with each of these suggestions is the same, so the unigram approach then considers which of them has been used most frequently for this type of sentence, and the most used word is selected as the correctly spelled one. For instance:

• খাই = 20 times
• নাই = 15 times
• যাই = 12 times
• পাই = 07 times
• গাই = 13 times
• ভাই = 05 times

Here "খাই" appears the maximum number of times among the similar words, so the unigram approach chooses it for that sentence. This is how the unigram approach filters words, as sketched below.
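The suggestion step just described, including the 65% threshold and the unigram tie-break, might look roughly like the following sketch. The function names are ours, similarity_ratio refers to the earlier Levenshtein sketch, and the candidate counts simply restate the paper's কাই example.

# Illustrative candidate frequencies from the paper's example.
unigram_count = {"খাই": 20, "নাই": 15, "যাই": 12, "পাই": 7, "গাই": 13, "ভাই": 5}

def best_suggestion(word, corpus, similarity_ratio, threshold=0.65):
    # Keep only corpus words whose similarity ratio clears the 65% threshold.
    scored = [(similarity_ratio(word, w), w) for w in corpus]
    scored = [(r, w) for r, w in scored if r >= threshold]
    if not scored:
        return word                       # empty suggestion list: keep the input word
    top = max(r for r, _ in scored)
    tied = [w for r, w in scored if r == top]
    # Unigram tie-break: among equally similar candidates, pick the word
    # seen most often in the corpus.
    return max(tied, key=lambda w: unigram_count.get(w, 0))

# best_suggestion("কাই", {"খাই", "নাই", "যাই", "পাই", "গাই", "ভাই"}, similarity_ratio)
# would return "খাই", the most frequent of the equally similar candidates.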
To generate the unigram model, around 10,000 sentences were utilized, consisting of more than 126,000 words, and the unigram set was built from around 18,000 words. The unigram approach uses the following formula:

Y(w_i) = X(w_i) / X(w)   (3)

Here Y(w_i) is the probability of the recommended word w_i, X(w_i) is the number of times that word occurs in the data corpus, and X(w) is the total number of words in the data corpus.

V. RESULTS AND ANALYSIS

To suggest a word from the data corpus, the system computes the percentage similarity between two Bangla words; if the similarity is at least 65%, that word is considered for the suggestion list, otherwise the unmatched word is added to the corpus as a new entry. The accuracy of the proposed system is calculated automatically. Table I reports the overall performance of the framework both with and without the unigram approach. The automatic evaluation uses the following formula:

Accuracy = (TP + TN) / (TP + FP + FN + TN)   (4)

where TP = true positives, TN = true negatives, FP = false positives, and FN = false negatives.

Table I: Performance Evaluation

Exp. No. | Input words | Correct (with unigram) | Correct (without unigram) | Errors (with unigram) | Errors (without unigram) | Accuracy with unigram (%) | Accuracy without unigram (%)
1  | 25 | 22 | 20 | 3 | 5  | 88.00 | 80.00
2  | 20 | 12 | 10 | 8 | 10 | 60.00 | 50.00
3  | 15 | 12 | 10 | 3 | 5  | 80.00 | 66.67
4  | 15 | 11 | 13 | 4 | 2  | 73.33 | 86.67
5  | 15 | 13 | 11 | 2 | 4  | 86.67 | 73.33
6  | 15 | 10 | 12 | 5 | 3  | 66.67 | 80.00
7  | 15 | 12 | 9  | 3 | 6  | 80.00 | 60.00
8  | 15 | 11 | 11 | 4 | 4  | 73.33 | 73.33
9  | 15 | 14 | 10 | 1 | 5  | 93.33 | 66.67
10 | 15 | 12 | 9  | 3 | 6  | 80.00 | 60.00
Total | 165 | 129 | 115 | 36 | 50 | Avg. 78.13 | Avg. 69.67

In the evaluation scheme, more than 18,000 unique words are used as training data and 165 words as test data. With the unigram approach 129 of the test words are corrected in agreement with the user's context, while without it 115 are; over the ten experiments the totals are 36 errors with the unigram approach and 50 without it. Most of the erroneous outputs are themselves valid words, but incorrect for the user's context.

Figure 2. Performance Evaluation

After analysing the performance of the system, the average accuracy is found to be 78.13% with the unigram approach and 69.67% without it. Figure 2 compares the performance of the two settings.
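A small sketch of Eq. (3) and Eq. (4), together with a check of the averages reported in Table I, is given below. The function names are ours and the per-experiment figures are copied from the table.

from collections import Counter

def unigram_probability(word: str, corpus_words: list) -> float:
    """Eq. (3): Y(w_i) = X(w_i) / X(w), the word's relative frequency."""
    counts = Counter(corpus_words)
    return counts[word] / len(corpus_words)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Eq. (4): (TP + TN) / (TP + FP + FN + TN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# In Table I the word-level accuracy of one experiment reduces to
# correct / total input words, e.g. experiment 1: 22 correct out of 25.
print(accuracy(tp=22, tn=0, fp=3, fn=0))                 # -> 0.88

# Averages of the per-experiment accuracies reported in Table I:
acc_with    = [88.00, 60.00, 80.00, 73.33, 86.67, 66.67, 80.00, 73.33, 93.33, 80.00]
acc_without = [80.00, 50.00, 66.67, 86.67, 73.33, 80.00, 60.00, 73.33, 66.67, 60.00]
print(sum(acc_with) / 10, sum(acc_without) / 10)         # approx. 78.13 and 69.67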
Consequently, the unigram approach raises the accuracy level and, together with the Levenshtein distance, improves the proposed system efficiently. The main factors leading to errors in our results are that users generally type in lowercase while some Unicode mappings use uppercase letters, and that users tend to type shortcut forms of regular words.

VI. CONCLUSION

This research aimed to build a framework that can reliably identify misspelled transliterated words and automatically recommend the correctly spelled word. The spell checker can additionally search words alphabetically to reduce suggestion time. Since a massive amount of data is added to the lexicon for making suggestions and the unigram approach extracts the appropriate word from the suggestion list, the accuracy level remains at a solid standard; in this way the system corrects text automatically.

REFERENCES
[1] G. Chowdhury, "Natural language processing," Annual Review of Information Science and Technology, vol. 37, no. 1, 2005, pp. 51-89.
[2] E. Mays, F. Damerau and R. Mercer, "Context-based spelling correction," Information Processing & Management, vol. 27, no. 5, 1991, pp. 517-522.
[3] P. Samanta and B. B. Chaudhuri, "A simple real-word error detection and correction using local word bigram and trigram," ROCLING, 2013, pp. 211-220.
[4] N. UzZaman and M. Khan, "A Double Metaphone encoding for Bangla and its application in spelling checker," 2005 International Conference on Natural Language Processing and Knowledge Engineering, Wuhan, China, 2005, pp. 705-710.
[5] Z. Z. Wint, T. Ducros and M. Aritsugi, "Spell corrector to social media datasets in message filtering systems," 2017 Twelfth International Conference on Digital Information Management (ICDIM), Fukuoka, 2017, pp. 209-215.
[6] S. R. Murthy, A. N. Akshatha, C. G. Upadhyaya and P. R. Kumar, "Kannada spell checker with sandhi splitter," 2017 International Conference on Advances in Computing, Communications, and Informatics (ICACCI), Udupi, 2017, pp. 950-956.
[7] D. Li and D. Peng, "Spelling Correction for Chinese Language Based on Pinyin-Soundex Algorithm," 2011 International Conference on Internet Technology and Applications, Wuhan, 2011, pp. 1-3.
[8] J. Islam, M. Mubassira, M. R. Islam and A. K. Das, "A Speech Recognition System for Bengali Language using Recurrent Neural Network," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019.
[9] R. A. Tuhin, B. K. Paul, F. Nawrine, M. Akter and A. K. Das, "An Automated System of Sentiment Analysis from Bangla Text using Supervised Learning Techniques," 2019 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019.
[10] A. K. Das, T. Adhikary, M. A. Razzaque and C. S. Hong, "An intelligent approach for virtual machine and QoS provisioning in cloud computing," The International Conference on Information Networking 2013 (ICOIN), Bangkok, 2013, pp. 462-467.
Data Extraction from Natural Language Using Universal Networking Language

Aloke Kumar Saha 1, M. F. Mridha 1, Jahir Ibna Rafiq 1 and Jugal Krishna Das 2
1 Dept. of Computer Science and Engineering, University of Asia Pacific, Dhaka, Bangladesh
2 Dept. of Computer Science and Engineering, Jahangirnagar University, Dhaka
mdfirozm@yahoo.com

Abstract— Data extraction, which falls under the area of Natural Language Processing (NLP), finds specific data in unstructured data. This research introduces a technique for data extraction that provides the user with exactly what is asked for, without returning unsolicited data. The proposal establishes logical and symmetrical relations between the search criteria and the operational data. Since the data is unstructured and its volume can be relatively high, we place strong emphasis on organizing the data into categories defined and used by the researchers for further exploitation. Universal Networking Language (UNL) is used to compare and merge data. A new machine learning approach is presented that augments the efficiency of Natural Language Computing (NLC) and Cognitive Computing (CC). The proposed approach uses UNL relations, and test results show much improved accuracy and efficient generalization. Existing machine learning approaches are widely used on numeric data and produce the expected results, but one key limitation is the range of data types they can handle: current models fail to properly train on semantics and logical consistency, and many natural language properties are either ignored or prove too difficult. The approach presented in this paper therefore has additional strengths in producing meaningful and worthwhile results. Moreover, complex data consisting of alphanumeric values, sequences and result criteria can be handled correctly.

Keywords—Big Data; Data Extraction; NLP; Universal Networking; Natural Language Computing; Machine Learning.

I. INTRODUCTION

Machine Learning (ML) algorithms such as Probabilistic Learning, Neural Networks, Support Vector Machines and Genetic Algorithms [1] are widely used.
They are classified into Evolutionary, Supervised, Reinforcement, and Unsupervised learning. They are taught through examples, in various forms of numeric values, so that they can recognize the learned trends when unfamiliar but similar data is given. They are provided with numeric values that represent certain features of the target objects. For instance, a motion detector placed in front of a supermarket gate only opens the gate when a car comes within ten feet of it; here, average car weights are fed in during
training so that the door remains shut if, say, a ball rolls near the gate. These numeric values are an important property for recognizing data, but they are not sufficient for most natural language tasks and their further applications. At present, more and more scientific and research work is being carried out to make machines function like humans, giving proper answers and executing repetitive tasks immaculately. This is most common in the field of numbers, but for now it leaves out data that are not purely numeric. Many machine-learning algorithms are limited to one or two data types, yet dealing with multiple data types is more realistic, because real-life data tends to consist of numbers as well as instructions in plain language; it is quite common to receive signals of mixed type on top of plain alphabetic language, such as alphanumeric data and special characters like mathematical formulas. Currently we can hardly focus on data analysis, pick out the data asked for by a customer, or sort data on the basis of a simple query. Natural Language (NL) based applications should find the exact linguistic meaning of the words in a sentence in order to correctly analyse a query presented in plain language, for instance searching for operational amplifiers in a search engine. A better understanding of word formation and of meaning in the context of a sentence is important, and so are the logical steps in the question. Once these are achieved, query-based data extraction from unstructured data, i.e. data that are not homogeneous in nature and are formed from many combinations of data types such as numbers, letters and characters, becomes possible, as do properly answering questions, providing meaningful summaries, and translating from language to language. Many numeric-driven ML approaches are simply not up to these tasks; hence, a new machine learning approach needs to be developed. Recent research shows that the growing demand for text data has eclipsed that for numeric data: on the Internet, 80% of the data are text and the remaining 20% are numeric [2]. Sorting through this huge amount of text and properly finding the required text is therefore an important task. Hence, this paper suggests an ML algorithm to efficiently learn NL semantics. The algorithm can train on logic and semantics, yielding a rewarding improvement in output data and in query-driven data analysis over inputs of different forms and sizes.

The organization of this paper is as follows: Section 2 describes the literature review, Section 3 gives a short description of the data extraction method, Section 4 describes the UNL system, Section 5 depicts the proposed model, Section 6 demonstrates our results and, finally, Section 7 concludes with some remarks on future work.

II. LITERATURE REVIEW

Data extraction from natural language builds on earlier approaches to natural language generation from semantic networks,
which were discussed in the context of verbalizing ontologies [1]. The textual realisation of natural language has received relatively little attention. Our main interest is in finding data that are generally available in natural language format [2]. UNL expressions are generated from natural language, for example from Bangla to UNL [3, 4], and are then used for targeted data extraction. For data extraction from trained data (UNL expressions), parallel text knowledge is derived from templates and learning is mostly done through it [5, 6]. The text data is aligned in a trained database in a two-step process: first, the system finds the best-matched data in string format, the larger the better, as it appears in the unstructured data; then a statistical language model is gradually built which uses the entropy of the original query and confirms the length of the matched data. The emphasis here is on generating natural language that closely expresses a matched case in the source data with no prior knowledge of the data. Achieving this level of accuracy depends on how closely we can find an entity that is traceable and matches our query or text input. Where such sentences are not directly derivable from the text, it is possible to modify them to make them transferable [7, 8, 9]. We adopt a syntactic pruning approach in which sentences are first parsed and the resulting structures are then simplified by applying hand-built rules and filters [10, 11, 12].

III. DATA EXTRACTION METHOD

Machine learning algorithms driven by regression, classification, and data matching over purely numeric values are not sufficient for processing natural language; the same holds for natural language understanding and cognitive computing. Semantics is an inherent and vital property, especially when the computation of natural language and cognitive learning are involved, and it has to be stressed in any analysis performed by the machine. In other words, the proper meaning of a sentence largely depends on how accurately the machine computes the semantics in the context of that sentence. Machines should be taught and trained on the semantics of words, with the goal of recognizing new semantics based on training; this is specifically a requirement for cognitive computing. The task given to the machine has to be completed on the basis of the nature of the query, and the appropriate meaning of the sentence lies in the logic used to ask the question. Hence, machine learning for unstructured data should be capable of reasonable logical progression, and its actions must reflect that logic. In stark contrast to the currently used model of feeding numeric data for training, machine learning for unstructured data needs to focus on the semantics of words and on logic. The proposed model meets the properties required for successful derivation of semantics and logic. Instead of feeding big chunks of datasets during training, the proposed learning model is semantics driven; it is trained on logical connections and
explanation of the action to be taken. This is akin to the human learning model: our learning is effective when logic is correctly presented and well understood by the individual, rather than merely committed to memory. By correctly learning a single piece of logic, all homogeneous logical work can be inferred and executed. Conventional machine learning on numeric data (numbers only), on the other hand, follows the path of feeding in a large number of examples while leaving out any explanation or logic behind the task; it can therefore only execute tasks that are similar in nature and produce results, and its limitation shows on erratic data, data that may not conform to the examples and has properties with limited functionality. A key factor is how quickly this algorithm learns and trains; semantics are instilled while computation is carried on. Both procedures are conducive to learning and ultimately to deriving the correct data, bearing in mind that they are equally important and closely interrelated. In the majority of cases, learning is done through computing, except when new semantics are introduced; however, as newer semantics are proposed and existing semantics research provides new knowledge, learning is often differentiated from computing. This paradigm also incorporates the refined meaning of words, mainly from Word Feature (WF) tables and to a lesser extent from Word Knowledge (WK) tables and NL corpora. Since the new paradigm shifts the teaching method from examples to semantics and logic, it produces better generalization, which we identify as a key improvement. Because computing and learning are driven by logical progress and words are interpreted in their contextual meaning, there is little appetite for large datasets such as NL corpora; while the proposed model is independent of such datasets, many other models, such as probability-based N-grams, are very much dependent on them. Machine learning in NLP has been tremendously successful in training and computing. It can capitalize on the new paradigm mentioned here, often learns the way a person learns and hence produces results very similar to human judgment, succeeding in giving the right answers. The procedure is efficient: ML in NLP takes the full query as a command, divides it into parts of an action or a sequence of steps, and finally presents the most accurate result as the correct answer. Let us examine the following examples to clarify the procedure.

[Example 1] "Please give me celebration picture of last Sunday's FIFA world cup finale from twitter." The working steps of ML in NLP will be:
a. Go to the twitter website.
b. Prompt the user to log in.
c. Determine the exact date of last Sunday from today's date.
d. Using hashtags of the key words, #Celebration #FIFA #Finale, find the most relevant photos.
e. Produce the result with the photos ordered by the likes they received: more likes means a higher position, so popular pictures are shown first. A small sketch of steps (d) and (e) follows below.
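Steps (d) and (e) of Example 1 could be approximated by a sketch such as the one below. The stop-word list, the photo data layout and the function names are hypothetical, and no actual Twitter API call is shown.

# Illustrative only: derive hashtags from the query and rank candidate photos
# by like count; the real system would obtain the photos from Twitter.
STOPWORDS = {"please", "give", "me", "of", "the", "from", "last", "picture"}

def to_hashtags(query: str) -> list:
    words = [w.strip(".,!?'\"").lower() for w in query.split()]
    return ["#" + w.capitalize() for w in words if w and w not in STOPWORDS]

def rank_by_likes(photos: list) -> list:
    # photos: dicts such as {"url": "...", "likes": 1234}; most-liked first.
    return sorted(photos, key=lambda p: p["likes"], reverse=True)

print(to_hashtags("celebration picture of the FIFA world cup finale"))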
[Example 2] "Please show me the statistics of people who passed away in accidents on the M25 motorway in the UK." The working steps of ML in NLP will be:
a. Keywords are identified, such as UK, M25 motorway, accident, death and so on.
b. Here, search engines like Google are preferable for looking up appropriate data.
c. Go to Google or a similar search engine and search for all combinations of the keywords.
d. Results from different sources are cross-checked and verified; the most strongly matched data are taken as correct.
e. The result is produced for the user.

IV. UNL SYSTEM

The structure of the UNL system consists of three parts, namely Universal Words, UNL Attributes and UNL Relations. A universal word is an English word represented by a node in a hypergraph [13]. The nodes associated with a sentence are connected by relations known as UNL relations. Each universal word has attributes that uniquely specify the word and place it within a conceptual hierarchy derived from the UNL knowledge base. Each universal word consists of a headword along with a constraint list: the headword is the unit form of the English word, known as the label, while each constraint in the list corresponds to a concept of that word. The attribute list associated with an individual universal word represents the subjectivity of the word based on its grammatical properties [13]. Fig. 1 shows the structure of UNL.

Fig. 1. Structure of UNL

V. PROPOSED MODEL

For finding the target data, we use the compatibility property of a sentence on the UNL platform. The meaning-based relation between the words of a sentence is called compatibility or propriety. For example, in the sentence িsমার পািনেত চেল (the steamer moves on water) there is a meaningful relation between the words, since a boat can float on water. But the sentence িsমার আকােশ চেল (the steamer moves in the sky) does not make sense, because there is no meaningful relation between a boat and the sky; that is why the latter sentence lacks compatibility. To be a perfect sentence, it must have the quality of compatibility.

Fig. 2. Proposed UNL based data extraction model

To check the compatibility of a sentence, we build tables and populate them with entries indicating whether two words have a relation between them. We use the common properties of the rules as the elements of the tables: for instance, human pronouns such as I, we, he, she, and they are grouped together under a common property in UNL, so only a few rows and columns are needed compared with a table listing every subject, object and verb from the real world. We build a table using the UNL relations [4, 13] to check whether the subject, object and other words stand in a proper relation. Fig. 2 shows the proposed UNL-based data extraction model.
Now, to check compatibility, after a sentence passes the previous requirements we find the UNL rules for the sentence and then look up the tables to see whether the subject-verb, object-verb and subject-object relations are all true. If the corresponding cells of all three relation tables are true, we can say that the sentence has perfectly meaningful relations between its words and therefore has the quality of compatibility. Let us explain with an example. We want to find the data for "where does the cycle move?", whose correct answer is "on the road". Consider the sentence গািড় রাsায় চেল (the car moves on the road). In our proposed model the natural language is first converted into a UNL expression, and then the target data is found using the UNL relations. From the UNL module we obtain the following output:

{unl}
obj(move(icl>occur,equ>displace,plt>thing,plf>thing,obj>thing).@entry.@present,car(icl>wheeled_vehicle>thing))
plc(move(icl>occur,equ>displace,plt>thing,plf>thing,obj>thing).@entry.@present,street(icl>thoroughfare>thing).@def)
{/unl}

Now we look up the tables. In the "obj relation table" [4, 13] we find the value true in the intersection cell of "icl>wheeled_vehicle>thing" and "icl>occur" or "equ>displace". In the "plc relation table" we also find a true value in the intersection cell of "icl>thoroughfare>thing" and "icl>occur". Since both combinations yield true, the sentence has meaningful relations among its words and we can find our target data. Consider now another example: গািড় আকােশ চেল (the car moves in the sky). The corresponding UNL expression is:

{unl}
obj(move(icl>occur,equ>displace,plt>thing,plf>thing,obj>thing).@entry.@present,car(icl>wheeled_vehicle>thing).@def)
plc(move(icl>occur,equ>displace,plt>thing,plf>thing,obj>thing).@entry.@present,sky(icl>atmosphere>thing).@def)
{/unl}

Here we find only one cell containing a true value, the intersection of "icl>wheeled_vehicle>thing" and "icl>occur"; the intersection cell in the "plc relation table" for "icl>atmosphere>thing" and "icl>occur" does not contain a true value. From this we conclude that the sentence has no meaningful relations between its words, and we are then unable to find the target data.
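The table lookup described above can be illustrated with the sketch below. The relation table here encodes only the two worked examples, and the parsing regular expression and function names are ours rather than part of the UNL specification; the real obj/plc tables come from the UNL knowledge base [4, 13].

import re

# Toy stand-in for the obj/plc relation tables: only the cells needed for the
# two example sentences are filled in.
RELATION_TABLE = {
    ("obj", "icl>wheeled_vehicle>thing", "icl>occur"): True,
    ("plc", "icl>thoroughfare>thing",    "icl>occur"): True,
    # ("plc", "icl>atmosphere>thing", "icl>occur") is absent -> not compatible
}

# Pattern tailored to relation lines like obj(move(...).@entry.@present,car(...))
REL = re.compile(r"(\w+)\((\w+)\(([^)]*)\)[^,]*,(\w+)\(([^)]*)\)")

def is_compatible(unl_expression: str) -> bool:
    """True only if every relation in the expression has a true table cell."""
    for rel, _verb, verb_constraints, _word, word_constraints in REL.findall(unl_expression):
        verb_key = verb_constraints.split(",")[0]   # e.g. icl>occur
        word_key = word_constraints.split(",")[0]   # e.g. icl>wheeled_vehicle>thing
        if not RELATION_TABLE.get((rel, word_key, verb_key), False):
            return False
    return True

example = """obj(move(icl>occur,equ>displace,plt>thing,plf>thing,obj>thing).@entry.@present,car(icl>wheeled_vehicle>thing))
plc(move(icl>occur,equ>displace,plt>thing,plf>thing,obj>thing).@entry.@present,street(icl>thoroughfare>thing).@def)"""
print(is_compatible(example))   # True: "the car moves on the road" is compatible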
VI. RESULT ANALYSIS

In the proposed model, natural language is taken as input, converted into a UNL expression, and its semantic relations are then compared against the target data. An accuracy of about 93.5% was achieved with our proposed model. During the experiments it was observed that the accuracy diminishes when more than 5000 sentences are tested. The output generated by our system is given in Table 1 below:

TABLE 1. ACCURACY CALCULATION OF SENTENCES
Total Bangla words and morphemes in word dictionary: 20000
Correct words found: 18700
Percentage of accuracy: 93.5

VII. CONCLUSION

Unstructured data and mixed data types present a unique challenge, but it remains vital to explore these data types with regard to semantics learning, because they already occupy a major share of the data we have to deal with. In this work, a UNL-based data extraction method has been developed that can be used for searching target data. It is confirmed that Big Data is currently four-fifths unstructured, and a large majority of it is either text or alphanumeric. Many fundamental operations such as text-based search can really improve how NLC and CC work towards finding the data of interest rather than returning huge amounts of similar but mostly unrelated data.

References
[1] Liang, S., R. Stevens, D. Scott, and A. Rector (2012). OntoVerbal: a Protégé plugin for verbalizing ontology classes. In Proceedings of the Third International Conference on Biomedical Ontology.
[2] Heath, T. and C. Bizer (2011). Linked Data: Evolving the Web into a Global Data Space. Synthesis Lectures on the Semantic Web: Theory and Technology. Morgan & Claypool.
[3] M. F. Mridha, Molla Rashied Hussein, Md. Musfiqur Rahaman and Jugal Krishna Das, "A Proficient Autonomous Bangla Semantic Parser for Natural Language Processing," ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 15, August 2015, ISSN 1819-6608, pp. 6398-6403.
[4] M. F. Mridha, Aloke Kumar Saha, Md. Akhtaruzzaman Adnan, Molla Rashied Hussain and Jugal Krishna Das, "Design and Implementation of an Efficient Enconverter for Bangla Language," ARPN Journal of Engineering and Applied Sciences, vol. 10, no. 15, August 2015, ISSN 1819-6608, pp. 6543-6548.
[5] Muhammad F. Mridha, Aloke Kumar Saha, Mahadi Hasan and Jugal Krishna Das, "Solving Semantic Problem of Phrases in NLP Using Universal Networking Language (UNL)," (IJACSA) International Journal of Advanced Computer Science and Applications, Special Issue on Natural Language Processing (NLP), 2014.
[6] Duboue, P. A. and K. R. McKeown (2003). Statistical acquisition of content selection rules for natural language generation. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 121-128.
[7] Aloke Kumar Saha, Muhammad F. Mridha, Shammi Akhtar and Jugal Krishna Das, "Attribute Analysis for Bangla Words for Universal Networking Language (UNL)," (IJACSA) International Journal of Advanced Computer Science and Applications, vol. 4, no. 1, 2013.
[8] Cohn, T. and M. Lapata (2009). Sentence compression as tree transduction. Journal of Artificial Intelligence Research 34, 637-674.
[9] Filippova, K. and M. Strube (2008). Dependency tree based sentence compression. In Proceedings of the Fifth International Natural Language Generation Conference, pp. 25-32. Association for Computational Linguistics.
[8] Gagnon, M. and L. Da Sylva (2006). Text compression by syntactic pruning. Advances in Artificial Intelligence 1, 312-323.
[9] Hewlett, D., A. Kalyanpur, V. Kolovski, and C. Halaschek-Wiener (2005). Effective NL paraphrasing of ontologies on the Semantic Web. In Workshop on End-User Semantic Web Interaction, 4th Int. Semantic Web Conference, Galway, Ireland.
[10] Klein, D. and C. Manning (2003). Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Volume 1, pp. 423-430. Association for Computational Linguistics.
[11] Mendes, P., M. Jakob, and C. Bizer (2012). DBpedia: A multilingual cross-domain knowledge base. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2012).
[12] Stevens, R., J. Malone, S. Williams, R. Power, and A. Third (2011). Automating generation of textual class definitions from OWL to English. Journal of Biomedical Semantics 2(Suppl 2), S5.
[13] Universal Networking Language (UNL) Specifications Version 2005, http://www.undl.org/unlsys/unl/unl2005/.
/PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped</s>
<s>/False /CreateJDFFile false /Description << /ARA <FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064506390020064506420627064A064A0633002006390631063600200648063706280627063906290020062706440648062B0627062606420020062706440645062A062F062706480644062900200641064A00200645062C062706440627062A002006270644062306390645062706440020062706440645062E062A064406410629061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /CHS <FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU 
<FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA <FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE 
<FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) /JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM 
<FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS <FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
<s>See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/341043771An Emperical Framework of Idioms Translator From Bengali to English: RuleBased ApproachPreprint · April 2020DOI: 10.13140/RG.2.2.30728.98568CITATIONSREADS1245 authors, including:Some of the authors of this publication are also working on these related projects:Bangla Keyboard Layout design based on N-grams of Bangla Alphabets View projectSecurity in Industry 4.0 View projectAyesha KhatunChittagong University of Engineering & Technology20 PUBLICATIONS 17 CITATIONS SEE PROFILEMd Gulzar HussainGreen University of Bangladesh19 PUBLICATIONS 11 CITATIONS SEE PROFILEMd. Jahidul IslamGreen University of Bangladesh24 PUBLICATIONS 18 CITATIONS SEE PROFILESumaiya KabirGreen University of Bangladesh13 PUBLICATIONS 11 CITATIONS SEE PROFILEAll content following this page was uploaded by Md Gulzar Hussain on 30 April 2020.The user has requested enhancement of the downloaded file.https://www.researchgate.net/publication/341043771_An_Emperical_Framework_of_Idioms_Translator_From_Bengali_to_English_Rule_Based_Approach?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_2&_esc=publicationCoverPdfhttps://www.researchgate.net/publication/341043771_An_Emperical_Framework_of_Idioms_Translator_From_Bengali_to_English_Rule_Based_Approach?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_3&_esc=publicationCoverPdfhttps://www.researchgate.net/project/Bangla-Keyboard-Layout-design-based-on-N-grams-of-Bangla-Alphabets?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_9&_esc=publicationCoverPdfhttps://www.researchgate.net/project/Security-in-Industry-40?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_9&_esc=publicationCoverPdfhttps://www.researchgate.net/?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_1&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Ayesha_Khatun3?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Ayesha_Khatun3?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Chittagong_University_of_Engineering_Technology?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Ayesha_Khatun3?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Gulzar_Hussain?enrichId=rgreq-b7f9242b57f8cf06e4d1105a728579b6-XXX&enrichSource=Y292ZXJQYWdlOzM0MTA0Mzc3MTtBUzo4ODU5NzMxNjQ0OTQ4NTBAMTU4ODI0Mzg3MTIzMg%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/prof
An Empirical Framework of Idioms Translator From Bengali to English: Rule Based Approach

Ayesha Khatun∗, Md Gulzar Hussain†, Md Jahidul Islam‡, Sumaiya Kabir§, Md Mahin¶
Department of Computer Science & Engineering, Green University of Bangladesh, Dhaka, Bangladesh.
ayeshankhatun@gmail.com∗, gulzar.ace@gmail.com†, jahidul.jnucse@gmail.com‡, summa.cse@gmail.com§, mahin@cse.green.edu.bd¶

Abstract—Idioms play a vital part in effective communication as well as a crucial part of cultural inheritance. An idiom is a group of words whose combined meaning differs from the meanings of its individual words, and because of this metaphorical behavior idioms cause difficulties for general machine translation systems. In this paper, we propose a framework for translating Bengali to English.
Context-sensitive grammar rules are created for parsing, and a left-corner parsing algorithm is used to parse the sentences. We propose an algorithm for translating idioms within sentences. The proposed system has been implemented and tested with about 15,000 sentences, and its performance analysis gives 85.33% accuracy, which is quite satisfactory.

Keywords—Bangla Machine Translator; Idioms; Bangla Language Processing (BLP); Left-corner parsing algorithm.

I. INTRODUCTION

An idiom is a commonly used word or phrase that implies something other than its literal sense. Idioms convey a specific feeling and tone in a language, and because of their common use they are readily recognized by speakers. Machine Translation (MT) refers to the use of computers to translate a source language into target languages, generally without human intervention. MT techniques are used to translate vast quantities of data, containing millions of words, that could not feasibly be translated by hand; this makes MT a challenging task in the field of Natural Language Processing (NLP).

Native Bangla speakers grow up speaking and hearing idioms, and the same is true of native English speakers. Idioms play a vital role in the culture of every language community. In this modern age it is important to share knowledge and culture between regions, but because of the language barrier Bangladeshis often miss the opportunity to learn about other cultures. Bangla Language Processing can play an important role in overcoming this barrier. Our idiom translator will help Bengali speakers understand idioms in the English language, which will help them engage with that culture and break down the cultural barrier.

Generally, an MT model follows three main phases: parsing, transfer, and generation. Our idiom translator follows four stages: idiom translation, parsing, transfer, and generation. The idiom translator detects the idiomatic part of a sentence and translates it. The parser gathers the syntactic information of the sentence using Context-Free Grammars (CFG). In the transfer stage, rules are transferred from the source language to the target language, and finally the target sentence is produced in the generation stage. Because idioms do not signify the literal meaning of the words they contain, translating them from the source language to the target language is hard.
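The paper does not publish any code, but since the authors report a Java implementation (see the Implementation subsection below), the following minimal Java skeleton sketches how the four stages just described might be composed. The class name, method names, and the string-based tree representation are illustrative assumptions, not the authors' API.

// Minimal sketch of the four-stage flow (idiom translation, parsing, transfer,
// generation). Illustrative only: names and the bracketed-string tree
// representation are assumptions, not the paper's published code.
public class IdiomMtPipeline {

    // Stage 1: replace any idiom in the Bangla sentence with its plain meaning
    // (multi-word lookup against an idiom table such as Table I).
    static String translateIdioms(String banglaSentence) {
        return banglaSentence; // stub
    }

    // Stage 2: parse the idiom-free Bangla sentence with the Bangla grammar rules
    // (Table II); the tree is represented here as a bracketed string.
    static String parseBangla(String banglaSentence) {
        return "(S (NP ...) (VP ...))"; // stub
    }

    // Stage 3: map the Bangla tree onto an English tree using the rule-transfer
    // table and the bilingual lexicon.
    static String transfer(String banglaTree) {
        return "(S (NP ...) (VP ...))"; // stub
    }

    // Stage 4: read the English sentence off the transferred tree.
    static String generate(String englishTree) {
        return "..."; // stub
    }

    public static void main(String[] args) {
        String input = "..."; // a Bangla sentence, possibly containing an idiom
        System.out.println(generate(transfer(parseBangla(translateIdioms(input)))));
    }
}

Running the idiom-translation stage first means the parsing, transfer, and generation stages only ever see literal sentences, which is the main architectural point the paper develops in Section III.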
<s>transfer stage, rules are transferred from sourcelanguage to target language. And finally, the targeted sentenceis generated in the generation stage. As idioms do not signifythe literal meaning of the words used, it is hard to translateidioms from source language to target language.The rest of the paper is organized as follows: Section IIdiscusses related works. Methodology is discussed in SectionIII and it illustrates a sample following our proposed method-ology. Section IV demonstrates the result and discussion andfinally Section V refers the conclusion.II. RELETED WORKResearch on the processing of natural language started in the1950s. In the late 1980s, the first statistical machine translationsystems were developed [1]. Till now many works are donein English language. Authors of [2] developed a Japanese-English machine translation system which was supported bythe Japanese government’s science and technology agency.The system applies many structural transformations during thetransfer phase and generation phase to relieve the structuraldifference of the same contents and avoid ellipsis problems.Authors of [3] proposed an unsupervised Neural MachineTranslation (NMT) system for translating English to Germanand German to English news.Machine translation from Bangla language to other lan-guages is in initial step now. Many works are done recentlyon Bangla to English or vice versa. A phrase-based StatisticalMachine Translation (SMT) approach is proposed in [4]. Intheir work Out-of-Vocabulary (OOV) words are also handled.Authors of [5] proposed a rule-based transfer approach. Theyproposed an algorithm for searching the word from the lexiconand searching lexicon is made efficient by an intelligentinteger based lexicon system. NLP techniques used to translateEnglish to Bangla sentences in [6]. The context-free grammarused to validate the syntactical structure of a sentence andbottom-up approach is used to parse sentences. They used 50sentences for every tense. In [7] they proposed a verb basedmachine translation approach for English to Bangla. Theyidentified the main verb and make a simple form of Englishsentence. Then they easily translate it into Bangla. Authorsof [8] also proposed context-sensitive grammar to translateBangla to English. Bangla sentences including assertive, in-terrogative and imperative sentences. Set of context-sensitivegrammar rules are proposed to translate imperative, optativeand exclamatory Bangla sentences to English [9]. A newtechnique with a set of context-sensitive grammar rules is pro-posed to parse any Bangla sentences with imperative, optativeand exclamatory Bangla sentences in [10] where moods gotimportance than the structure of sentence. They are generatinga parse tree according to the sentences category.They used400 sentences and got an accuracy of 81%. Authors of [11]work to find the appropriate verb according to the tense andsubject. A procedure for finding semantically valid verb isproposed. They worked with verb root and different algorithmsare proposed in this paper.Maximum MT systems translate Bangla sentences to corre-sponding English sentences but we found only one of themincludes idioms [12]. This paper presents, in addition toEnglish, a multi lingual parallel idiom data set for seven Indianlanguages, and shows its relevance for two NLP applications.A set of CSG rules is proposed for our MT system to translateBangla sentences with idioms to it’s corresponding Englishsentence. 
Most existing work does not describe the architecture of the idiom-translation procedure and works with relatively little data. In this system, we propose an architecture for translating such sentences.
<s>this system we proposed an architecture for translatingsentences.III. PROPOSED METHODOLOGYIn this propose system we have ten modules, the modulesare idioms checker, idioms translator, tokenizer, rule gener-ator, database, parser, target language rules, source languagerules, machine translator and generator. Firstly, we consider aBengali sentence ”এই সমােজ বৃ লােকরা অচল পয়সা” as inputof the system. Step by step procedure is given in Fig. 1.Fig. 1. Workflow of proposed systemA. TokenizerThe main task of the tokenizer module is to split sentencesinto unit strings. It is like a database system of words withcorresponding Parts of Speech (POS) tag. Suppose for theinput sentence ”এই সমােজ বৃ লােকরা অচল পয়সা ”, the outputwill be like "এই”,“সমােজ”, “বৃ ” ,“ লােকরা”, “অচল", "পয়সা”.After tokenizing the sentence, tokens will be going to idiomschecker.B. Idioms CheckerThe main task of idioms checker is to check the idiomsin the sentence by using Idioms checker algorithm which isAlgorithm 1. In idioms dataset when wi =অচল, where অচলalso find in idioms dataset di, then it will find the next wordwi+1 =পয়সা, then concat the string k = stringConcat(অচল,পয়সা). Now idioms di = is equal to ki+1 as idioms found indataset so it will go to the next step Idioms translator if it doesnot find any term then it will concat the string up to i = 5 andthen go to parser. If the sentence contains any idioms, then itwill go to the idioms translator. For example,"এই সমােজ বৃলােকরা অচল পয়সা" as "অচল পয়সা" is an idiom, it will go tothe idioms translator module.Algorithm 1: Algorithm for Idioms Checker1. If wi is equal to split of di;2. Find wi+1;3. Function mPairWord(w1, w2, .....wn);4. k = function stringConcat(w1, w2, .....wn);5. for i = 0 to idioms dataset length doif k == di thengo to 6;break;elsego to 7;endend6. go to idioms Translator module;7. go to Parser module;C. Idioms TranslatorThis translator translates the idioms into its original mean-ing. As the idioms checker find that the sample input sentencehas idioms“অচল পয়সা”, after that this module translatesthe idioms into its corresponding meaning“মূল হীন”. Aftertranslating idioms, it goes to parser module as shown in FigFig. 2. Module of Idioms TranslatorD. DatabaseDatabase module is just like a dictionary which contains thelexicon or token of a sentence and the related POS tag. Forexample, in this sentence the pos tag of corresponding wordsare,“এই”→ PN, “সমােজ”→ N, “বৃ ”→ Adj, “ লােকরা”→ N,“মূল হীন”→ Adj. In this system, it has another table whichhas a set of Bangla idioms and its meaning. Table I shows theIdioms Table.TABLE IBANGLA IDIOMS TABLEIdioms (di) Meaning (mi)অচল পয়সা মূল হীনঅকালকু া অপদাথইচঁেড় পাকা অকালপউ ম-মধ ম হারএলািহ কা িবরাট ব াপারTABLE IIBANGLA CSGS RULESRule No Bangla CSGs Rules1 S → NP VP2 NP → N (Biv) (Adj)3 NP → N (Aux) (PP)4 NP → NP NP5 NP → (PN) N (Biv) (Adj)6 NP → (Adj) N (Biv)7 NP → (Qnt) (PP) N — PN8 NP → N9 PP → Null10 V → Null11 VP → V12 VP → (Adj)13 VP → (NP) VP14 VP → V (Aux)15 Adj → বৃ</s>
<s>, ভাল, অমূল , খারাপ, . . . .16 PN → এই, আিম, আপিন, তুিম, . . . . .17 N → চার, হার, লাক, সমাজ,. . . . .18 V → হয়, ছাড়ল, পড়া, খাওয়া, . . . .19 Biv → টােক, এরা, এ, . . . .20 Aux → িদেয়, পের, কের, . . . . .21 Qnt → একিট, পাচিট, . . .E. Rule GeneratorThe main purpose of the rule generator module is togenerate the grammatical rules of Bangla sentences. For trans-lating, the sentences, this module generates Context-SensitiveGrammar (CSG) rules. For this input sentence and this is builtwith the help of rules, these sentences need those rules NP→ N (Biv) (Adj), S → NP VP, NP → (Qnt) (PP) N — PNfor generating the parse tree. Sample CSG of Bangla simplesentences is listed in Table II.F. ParserGraphical view of the grammatical structure of the sentenceis called the parse tree. Parser module helps to generate theparse tree of a sentence by using CSG rules and lexicon. Weused left corner parsing algorithm to parse the sentence. Thismodule generates the parse tree for the input sentence“এইসমােজ বৃ লােকরা মূল হীন” which is shown in Fig. 3.G. TransferThe task of the transfer module is to translate Banglasentence to English language. The grammatical rule for trans-forming of grammar rule is listed in Table III. Using thisgrammar rules and transformation algorithm, we can get parsetree of English sentence, which is shown in Fig. 4.Fig. 3. Representation of Bangla parse treeTABLE IIIENGLISH CSGS RULESRule No English CSGs Rules1 S → NP VP2 S → VP NP3 NP → NP NP4 NP → Det N5 NP → (PP) N (Adv)6 NP → (PP) (PN) (Det) N7 NP → Adj N8 NP → Qnt N9 NP → N10 NP → PN11 NP → (Aux) N12 VP → V13 VP → V (Adj)14 VP → VP NP15 VP → V (Gr) (N) (Adj)16 VP → Aux V17 N → thief, beating, society, person,18 PN → this, that, I, She,...19 V → release, are, like, eat, go,...20 Adj → old, priceless, bad, good, ...21 Aux → do, are, is,..22 PP → in, on, to,..23 Det → the, a, an,24 Gr → ingFig. 4. Representation of English parse treeThe transformation process is divided into two part, ruletransfer and lexicon transfer. The process of transforminggrammar from source to target or from target to sourcelanguage is shown in Table IV.IV. EXPERIMENTAL RESULTTo assess the efficiency of our proposed system, we haveevaluated the system with about 15000 distinct types ofsentences with distinct sentence lengths. We collected thesesentences from various books, websites, Bangla grammarbooks, Bangla text books etc.TABLE IVTRANSFORMATION OF TARGET TO SOURCE OR VICE VERSAA. ImplementationFor executing the system, we used, Windows 10 as theoperating system, Java Swing to build the user interface, Javaas the programming language, and NetBeans 8.2 as IDE. Thesnapshot of our implemented proposed MT system for thesentence “এই সমােজ বৃ লােকরা অচল পয়সা” with idioms isgiven in Fig. 5 where Google translator do not show theappropriate transformation, given</s>